Test Report: Hyperkit_macOS 19910

0805a48cef53763875eefc0e18e5d59dcaccd8a0:2024-11-05:36955

Tests failed (19/221)

TestOffline (195.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-052000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-052000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.694077551s)

-- stdout --
	* [offline-docker-052000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-052000" primary control-plane node in "offline-docker-052000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-052000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1105 10:41:15.480140   22692 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:41:15.480412   22692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:41:15.480419   22692 out.go:358] Setting ErrFile to fd 2...
	I1105 10:41:15.480425   22692 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:41:15.480675   22692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:41:15.482682   22692 out.go:352] Setting JSON to false
	I1105 10:41:15.512359   22692 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9644,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:41:15.512525   22692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:41:15.536661   22692 out.go:177] * [offline-docker-052000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:41:15.584812   22692 notify.go:220] Checking for updates...
	I1105 10:41:15.621882   22692 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:41:15.678535   22692 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:41:15.707644   22692 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:41:15.728586   22692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:41:15.749620   22692 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:41:15.770683   22692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:41:15.791793   22692 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:41:15.823604   22692 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:41:15.865567   22692 start.go:297] selected driver: hyperkit
	I1105 10:41:15.865585   22692 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:41:15.865597   22692 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:41:15.870992   22692 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:41:15.871143   22692 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:41:15.882529   22692 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:41:15.888939   22692 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:41:15.888966   22692 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:41:15.889004   22692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:41:15.889242   22692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:41:15.889277   22692 cni.go:84] Creating CNI manager for ""
	I1105 10:41:15.889312   22692 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 10:41:15.889318   22692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 10:41:15.889387   22692 start.go:340] cluster config:
	{Name:offline-docker-052000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:41:15.889477   22692 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:41:15.931504   22692 out.go:177] * Starting "offline-docker-052000" primary control-plane node in "offline-docker-052000" cluster
	I1105 10:41:15.952556   22692 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:41:15.952603   22692 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:41:15.952620   22692 cache.go:56] Caching tarball of preloaded images
	I1105 10:41:15.952737   22692 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:41:15.952747   22692 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:41:15.953013   22692 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/offline-docker-052000/config.json ...
	I1105 10:41:15.953034   22692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/offline-docker-052000/config.json: {Name:mk9fdda8fae12c7e321c3f99f794ea6642e7e3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:41:15.953470   22692 start.go:360] acquireMachinesLock for offline-docker-052000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:41:15.953551   22692 start.go:364] duration metric: took 66.621µs to acquireMachinesLock for "offline-docker-052000"
	I1105 10:41:15.953573   22692 start.go:93] Provisioning new machine with config: &{Name:offline-docker-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:41:15.953621   22692 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:41:15.974893   22692 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:41:15.975104   22692 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:41:15.975147   22692 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:41:15.986138   22692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60745
	I1105 10:41:15.986472   22692 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:41:15.986923   22692 main.go:141] libmachine: Using API Version  1
	I1105 10:41:15.986934   22692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:41:15.987148   22692 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:41:15.987241   22692 main.go:141] libmachine: (offline-docker-052000) Calling .GetMachineName
	I1105 10:41:15.987335   22692 main.go:141] libmachine: (offline-docker-052000) Calling .DriverName
	I1105 10:41:15.987449   22692 start.go:159] libmachine.API.Create for "offline-docker-052000" (driver="hyperkit")
	I1105 10:41:15.987470   22692 client.go:168] LocalClient.Create starting
	I1105 10:41:15.987504   22692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:41:15.987566   22692 main.go:141] libmachine: Decoding PEM data...
	I1105 10:41:15.987581   22692 main.go:141] libmachine: Parsing certificate...
	I1105 10:41:15.987664   22692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:41:15.987711   22692 main.go:141] libmachine: Decoding PEM data...
	I1105 10:41:15.987724   22692 main.go:141] libmachine: Parsing certificate...
	I1105 10:41:15.987737   22692 main.go:141] libmachine: Running pre-create checks...
	I1105 10:41:15.987744   22692 main.go:141] libmachine: (offline-docker-052000) Calling .PreCreateCheck
	I1105 10:41:15.987838   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:15.988062   22692 main.go:141] libmachine: (offline-docker-052000) Calling .GetConfigRaw
	I1105 10:41:15.995904   22692 main.go:141] libmachine: Creating machine...
	I1105 10:41:15.995919   22692 main.go:141] libmachine: (offline-docker-052000) Calling .Create
	I1105 10:41:15.996052   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:15.996251   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:41:15.996033   22712 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:41:15.996335   22692 main.go:141] libmachine: (offline-docker-052000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:41:16.476783   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:41:16.476684   22712 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/id_rsa...
	I1105 10:41:16.636185   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:41:16.636131   22712 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk...
	I1105 10:41:16.636199   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Writing magic tar header
	I1105 10:41:16.636240   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Writing SSH key tar header
	I1105 10:41:16.636592   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:41:16.636548   22712 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000 ...
	I1105 10:41:17.018776   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:17.018803   22692 main.go:141] libmachine: (offline-docker-052000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid
	I1105 10:41:17.018817   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Using UUID b279da9a-951e-4963-8b59-905bcf8b5b4a
	I1105 10:41:17.128271   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Generated MAC be:03:33:04:c8:af
	I1105 10:41:17.128304   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000
	I1105 10:41:17.128338   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b279da9a-951e-4963-8b59-905bcf8b5b4a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005121b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), C
mdLine:"", process:(*os.Process)(nil)}
	I1105 10:41:17.128373   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b279da9a-951e-4963-8b59-905bcf8b5b4a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0005121b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), C
mdLine:"", process:(*os.Process)(nil)}
	I1105 10:41:17.128430   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b279da9a-951e-4963-8b59-905bcf8b5b4a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bz
image,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000"}
	I1105 10:41:17.128480   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b279da9a-951e-4963-8b59-905bcf8b5b4a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikub
e/machines/offline-docker-052000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000"
	I1105 10:41:17.128494   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:41:17.131843   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 DEBUG: hyperkit: Pid is 22733
	I1105 10:41:17.132405   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 0
	I1105 10:41:17.132424   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:17.132518   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:17.133751   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:17.133991   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:17.134006   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:17.134031   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:17.134057   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:17.134070   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:17.134086   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:17.134098   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:17.134112   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:17.134133   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:17.134146   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:17.134172   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:17.134187   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:17.134201   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:17.134206   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:17.134212   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:17.134217   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:17.134231   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:17.134236   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:17.134243   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:17.134253   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:17.142712   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:41:17.199604   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:41:17.200474   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:41:17.200497   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:41:17.200509   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:41:17.200532   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:41:17.588893   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:41:17.588905   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:41:17.703862   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:41:17.703894   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:41:17.703942   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:41:17.703961   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:41:17.704733   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:41:17.704743   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:17 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:41:19.136307   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 1
	I1105 10:41:19.136322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:19.136366   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:19.137379   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:19.137492   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:19.137520   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:19.137568   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:19.137596   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:19.137650   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:19.137667   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:19.137688   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:19.137701   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:19.137713   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:19.137729   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:19.137741   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:19.137754   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:19.137795   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:19.137832   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:19.137866   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:19.137881   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:19.137895   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:19.137907   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:19.137917   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:19.137941   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:21.138403   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 2
	I1105 10:41:21.138429   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:21.138489   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:21.139463   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:21.139557   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:21.139570   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:21.139584   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:21.139592   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:21.139602   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:21.139612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:21.139623   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:21.139629   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:21.139639   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:21.139648   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:21.139665   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:21.139677   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:21.139688   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:21.139693   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:21.139700   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:21.139708   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:21.139716   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:21.139723   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:21.139730   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:21.139738   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:23.084061   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:41:23.084189   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:41:23.084211   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:41:23.104284   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:41:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:41:23.139785   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 3
	I1105 10:41:23.139801   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:23.139877   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:23.140880   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:23.140968   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:23.140978   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:23.140988   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:23.140993   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:23.141000   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:23.141005   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:23.141021   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:23.141042   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:23.141050   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:23.141056   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:23.141063   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:23.141071   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:23.141080   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:23.141088   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:23.141095   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:23.141103   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:23.141110   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:23.141119   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:23.141128   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:23.141135   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:25.141815   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 4
	I1105 10:41:25.141836   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:25.141930   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:25.142909   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:25.143046   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:25.143056   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:25.143069   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:25.143081   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:25.143093   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:25.143102   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:25.143113   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:25.143120   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:25.143126   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:25.143133   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:25.143142   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:25.143150   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:25.143157   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:25.143165   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:25.143172   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:25.143177   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:25.143187   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:25.143194   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:25.143211   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:25.143220   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:27.143339   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 5
	I1105 10:41:27.143355   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:27.143468   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:27.144405   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:27.144510   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:27.144520   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:27.144529   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:27.144545   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:27.144556   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:27.144562   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:27.144569   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:27.144576   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:27.144592   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:27.144605   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:27.144613   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:27.144620   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:27.144627   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:27.144635   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:27.144642   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:27.144667   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:27.144684   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:27.144696   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:27.144719   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:27.144728   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:29.146734   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 6
	I1105 10:41:29.146747   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:29.146784   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:29.147703   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:29.147796   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:29.147808   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:29.147830   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:29.147843   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:29.147859   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:29.147867   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:29.147875   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:29.147881   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:29.147901   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:29.147914   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:29.147922   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:29.147930   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:29.147947   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:29.147959   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:29.147968   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:29.147978   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:29.147984   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:29.147990   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:29.147997   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:29.148004   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:31.150007   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 7
	I1105 10:41:31.150020   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:31.150084   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:31.151200   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:31.151260   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:31.151268   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:31.151286   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:31.151292   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:31.151298   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:31.151303   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:31.151310   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:31.151315   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:31.151338   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:31.151349   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:31.151359   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:31.151388   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:31.151395   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:31.151404   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:31.151411   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:31.151416   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:31.151428   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:31.151441   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:31.151461   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:31.151470   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:33.151963   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 8
	I1105 10:41:33.151979   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:33.152050   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:33.153021   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:33.153112   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:33.153121   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:33.153131   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:33.153136   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:33.153142   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:33.153147   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:33.153154   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:33.153189   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:33.153201   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:33.153232   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:33.153241   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:33.153253   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:33.153263   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:33.153270   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:33.153275   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:33.153282   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:33.153287   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:33.153293   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:33.153305   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:33.153313   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:35.154571   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 9
	I1105 10:41:35.154584   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:35.154639   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:35.155596   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:35.155677   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:35.155687   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:35.155696   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:35.155703   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:35.155716   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:35.155728   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:35.155746   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:35.155752   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:35.155758   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:35.155769   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:35.155777   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:35.155784   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:35.155792   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:35.155800   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:35.155814   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:35.155825   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:35.155833   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:35.155841   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:35.155849   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:35.155856   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:37.157508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 10
	I1105 10:41:37.157523   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:37.157577   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:37.158529   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:37.158603   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:37.158612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:37.158620   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:37.158627   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:37.158633   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:37.158644   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:37.158662   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:37.158678   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:37.158689   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:37.158695   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:37.158704   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:37.158710   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:37.158722   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:37.158734   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:37.158749   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:37.158771   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:37.158789   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:37.158797   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:37.158805   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:37.158813   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:39.158975   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 11
	I1105 10:41:39.158991   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:39.159049   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:39.160015   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:39.160121   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:39.160131   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:39.160140   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:39.160146   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:39.160152   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:39.160158   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:39.160172   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:39.160181   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:39.160189   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:39.160197   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:39.160204   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:39.160210   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:39.160228   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:39.160237   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:39.160245   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:39.160257   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:39.160269   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:39.160279   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:39.160312   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:39.160322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:41.162259   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 12
	I1105 10:41:41.162276   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:41.162359   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:41.163323   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:41.163449   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:41.163461   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:41.163467   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:41.163473   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:41.163478   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:41.163486   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:41.163491   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:41.163497   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:41.163505   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:41.163515   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:41.163521   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:41.163538   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:41.163563   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:41.163572   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:41.163579   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:41.163589   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:41.163603   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:41.163620   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:41.163637   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:41.163650   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:43.164654   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 13
	I1105 10:41:43.164669   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:43.164747   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:43.165690   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:43.165759   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:43.165771   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:43.165779   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:43.165784   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:43.165790   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:43.165796   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:43.165808   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:43.165819   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:43.165826   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:43.165831   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:43.165838   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:43.165843   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:43.165851   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:43.165857   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:43.165864   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:43.165872   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:43.165880   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:43.165888   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:43.165905   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:43.165916   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:45.166416   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 14
	I1105 10:41:45.166432   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:45.166510   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:45.167432   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:45.167518   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:45.167525   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:45.167533   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:45.167538   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:45.167562   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:45.167568   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:45.167586   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:45.167596   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:45.167605   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:45.167613   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:45.167621   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:45.167628   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:45.167635   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:45.167641   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:45.167647   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:45.167653   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:45.167659   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:45.167666   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:45.167681   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:45.167700   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:47.168480   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 15
	I1105 10:41:47.168494   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:47.168540   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:47.169503   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:47.169588   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:47.169600   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:47.169609   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:47.169616   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:47.169622   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:47.169629   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:47.169644   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:47.169651   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:47.169658   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:47.169663   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:47.169679   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:47.169690   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:47.169698   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:47.169707   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:47.169723   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:47.169736   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:47.169744   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:47.169752   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:47.169759   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:47.169765   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:49.171090   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 16
	I1105 10:41:49.171108   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:49.171165   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:49.172121   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:49.172193   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:49.172205   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:49.172216   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:49.172224   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:49.172231   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:49.172237   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:49.172246   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:49.172253   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:49.172259   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:49.172266   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:49.172273   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:49.172280   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:49.172289   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:49.172304   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:49.172319   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:49.172330   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:49.172338   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:49.172347   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:49.172355   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:49.172367   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:51.172691   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 17
	I1105 10:41:51.172705   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:51.172776   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:51.173730   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:51.173823   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:51.173833   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:51.173839   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:51.173845   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:51.173853   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:51.173859   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:51.173871   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:51.173877   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:51.173893   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:51.173900   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:51.173907   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:51.173916   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:51.173926   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:51.173934   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:51.173940   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:51.173947   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:51.173960   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:51.173968   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:51.173977   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:51.173985   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:53.175401   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 18
	I1105 10:41:53.175413   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:53.175470   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:53.176406   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:53.176497   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:53.176512   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:53.176525   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:53.176551   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:53.176561   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:53.176577   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:53.176584   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:53.176592   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:53.176598   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:53.176606   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:53.176614   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:53.176630   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:53.176643   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:53.176658   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:53.176667   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:53.176673   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:53.176684   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:53.176691   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:53.176699   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:53.176708   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:55.178493   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 19
	I1105 10:41:55.178508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:55.178581   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:55.179511   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:55.179596   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:55.179604   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:55.179612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:55.179618   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:55.179624   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:55.179629   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:55.179645   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:55.179658   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:55.179665   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:55.179676   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:55.179682   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:55.179691   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:55.179696   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:55.179707   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:55.179720   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:55.179739   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:55.179747   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:55.179754   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:55.179761   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:55.179770   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:57.181789   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 20
	I1105 10:41:57.181803   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:57.181869   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:57.182800   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:57.182897   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:57.182927   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:57.182939   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:57.182949   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:57.182966   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:57.182975   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:57.182984   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:57.182991   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:57.182997   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:57.183003   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:57.183009   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:57.183017   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:57.183028   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:57.183039   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:57.183048   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:57.183056   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:57.183070   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:57.183083   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:57.183090   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:57.183098   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:41:59.183522   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 21
	I1105 10:41:59.183537   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:41:59.183557   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:41:59.184496   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:41:59.184604   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:41:59.184614   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:41:59.184621   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:41:59.184630   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:41:59.184636   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:41:59.184643   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:41:59.184649   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:41:59.184658   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:41:59.184675   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:41:59.184683   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:41:59.184697   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:41:59.184707   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:41:59.184714   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:41:59.184722   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:41:59.184736   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:41:59.184746   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:41:59.184758   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:41:59.184767   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:41:59.184774   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:41:59.184782   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:01.186762   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 22
	I1105 10:42:01.186776   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:01.186827   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:01.187801   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:01.187871   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:01.187878   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:01.187888   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:01.187894   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:01.187910   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:01.187921   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:01.187932   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:01.187955   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:01.187971   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:01.187980   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:01.187987   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:01.187995   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:01.188001   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:01.188008   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:01.188015   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:01.188022   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:01.188028   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:01.188035   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:01.188045   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:01.188053   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:03.188229   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 23
	I1105 10:42:03.188245   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:03.188307   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:03.189270   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:03.189344   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:03.189360   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:03.189390   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:03.189408   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:03.189422   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:03.189431   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:03.189446   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:03.189459   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:03.189473   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:03.189480   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:03.189488   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:03.189494   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:03.189502   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:03.189511   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:03.189518   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:03.189525   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:03.189533   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:03.189541   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:03.189548   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:03.189555   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:05.189706   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 24
	I1105 10:42:05.189720   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:05.189778   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:05.190726   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:05.190829   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:05.190844   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:05.190873   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:05.190881   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:05.190896   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:05.190908   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:05.190924   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:05.190936   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:05.190947   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:05.190959   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:05.190966   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:05.190979   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:05.190988   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:05.190993   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:05.190999   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:05.191006   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:05.191012   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:05.191017   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:05.191030   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:05.191054   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:07.193050   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 25
	I1105 10:42:07.193063   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:07.193134   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:07.194065   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:07.194159   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:07.194167   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:07.194177   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:07.194188   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:07.194197   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:07.194203   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:07.194222   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:07.194229   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:07.194236   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:07.194242   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:07.194264   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:07.194277   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:07.194300   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:07.194312   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:07.194322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:07.194333   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:07.194340   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:07.194347   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:07.194354   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:07.194377   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:09.196369   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 26
	I1105 10:42:09.196383   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:09.196407   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:09.197385   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:09.197472   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:09.197483   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:09.197492   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:09.197500   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:09.197508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:09.197514   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:09.197521   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:09.197527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:09.197539   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:09.197552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:09.197562   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:09.197571   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:09.197580   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:09.197587   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:09.197599   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:09.197612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:09.197620   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:09.197646   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:09.197654   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:09.197662   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:11.198151   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 27
	I1105 10:42:11.198164   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:11.198230   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:11.199171   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:11.199251   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:11.199261   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:11.199270   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:11.199275   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:11.199281   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:11.199286   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:11.199292   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:11.199298   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:11.199325   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:11.199339   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:11.199348   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:11.199355   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:11.199367   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:11.199375   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:11.199382   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:11.199390   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:11.199403   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:11.199412   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:11.199431   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:11.199443   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:13.200125   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 28
	I1105 10:42:13.200140   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:13.200195   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:13.201170   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:13.201223   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:13.201232   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:13.201240   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:13.201249   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:13.201257   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:13.201264   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:13.201282   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:13.201291   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:13.201310   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:13.201319   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:13.201328   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:13.201337   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:13.201344   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:13.201354   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:13.201361   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:13.201368   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:13.201386   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:13.201398   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:13.201410   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:13.201418   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:15.203450   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 29
	I1105 10:42:15.203466   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:15.203533   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:15.204477   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for be:03:33:04:c8:af in /var/db/dhcpd_leases ...
	I1105 10:42:15.204564   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:15.204574   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:15.204590   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:15.204612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:15.204621   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:15.204632   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:15.204642   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:15.204665   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:15.204678   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:15.204694   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:15.204712   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:15.204726   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:15.204734   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:15.204751   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:15.204762   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:15.204770   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:15.204777   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:15.204782   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:15.204789   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:15.204795   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:17.206833   22692 client.go:171] duration metric: took 1m1.218251719s to LocalClient.Create
	I1105 10:42:19.209012   22692 start.go:128] duration metric: took 1m3.254227174s to createHost
	I1105 10:42:19.209026   22692 start.go:83] releasing machines lock for "offline-docker-052000", held for 1m3.254330472s
	W1105 10:42:19.209040   22692 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:03:33:04:c8:af
	I1105 10:42:19.209370   22692 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:42:19.209394   22692 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:42:19.220850   22692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60781
	I1105 10:42:19.221169   22692 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:42:19.221529   22692 main.go:141] libmachine: Using API Version  1
	I1105 10:42:19.221543   22692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:42:19.221779   22692 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:42:19.222177   22692 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:42:19.222208   22692 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:42:19.233053   22692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60783
	I1105 10:42:19.233381   22692 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:42:19.233712   22692 main.go:141] libmachine: Using API Version  1
	I1105 10:42:19.233723   22692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:42:19.233935   22692 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:42:19.234048   22692 main.go:141] libmachine: (offline-docker-052000) Calling .GetState
	I1105 10:42:19.234154   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.234216   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:19.235380   22692 main.go:141] libmachine: (offline-docker-052000) Calling .DriverName
	I1105 10:42:19.293485   22692 out.go:177] * Deleting "offline-docker-052000" in hyperkit ...
	I1105 10:42:19.314419   22692 main.go:141] libmachine: (offline-docker-052000) Calling .Remove
	I1105 10:42:19.314563   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.314573   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.314639   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:19.315772   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.315838   22692 main.go:141] libmachine: (offline-docker-052000) DBG | waiting for graceful shutdown
	I1105 10:42:20.318011   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:20.318107   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:20.319252   22692 main.go:141] libmachine: (offline-docker-052000) DBG | waiting for graceful shutdown
	I1105 10:42:21.319687   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:21.319783   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:21.321165   22692 main.go:141] libmachine: (offline-docker-052000) DBG | waiting for graceful shutdown
	I1105 10:42:22.322192   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:22.322275   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:22.323069   22692 main.go:141] libmachine: (offline-docker-052000) DBG | waiting for graceful shutdown
	I1105 10:42:23.323828   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:23.323914   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:23.325067   22692 main.go:141] libmachine: (offline-docker-052000) DBG | waiting for graceful shutdown
	I1105 10:42:24.326514   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:24.326575   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22733
	I1105 10:42:24.327261   22692 main.go:141] libmachine: (offline-docker-052000) DBG | sending sigkill
	I1105 10:42:24.327268   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1105 10:42:24.342006   22692 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:03:33:04:c8:af
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:03:33:04:c8:af
	I1105 10:42:24.342025   22692 start.go:729] Will try again in 5 seconds ...
	I1105 10:42:24.355813   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:42:24 WARN : hyperkit: failed to read stdout: EOF
	I1105 10:42:24.355829   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:42:24 WARN : hyperkit: failed to read stderr: EOF
	I1105 10:42:29.342844   22692 start.go:360] acquireMachinesLock for offline-docker-052000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:43:22.045442   22692 start.go:364] duration metric: took 52.701616077s to acquireMachinesLock for "offline-docker-052000"
	I1105 10:43:22.045485   22692 start.go:93] Provisioning new machine with config: &{Name:offline-docker-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-052000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:43:22.045542   22692 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:43:22.067305   22692 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:43:22.067396   22692 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:43:22.067421   22692 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:43:22.078443   22692 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60791
	I1105 10:43:22.078765   22692 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:43:22.079136   22692 main.go:141] libmachine: Using API Version  1
	I1105 10:43:22.079157   22692 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:43:22.079388   22692 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:43:22.079505   22692 main.go:141] libmachine: (offline-docker-052000) Calling .GetMachineName
	I1105 10:43:22.079610   22692 main.go:141] libmachine: (offline-docker-052000) Calling .DriverName
	I1105 10:43:22.079724   22692 start.go:159] libmachine.API.Create for "offline-docker-052000" (driver="hyperkit")
	I1105 10:43:22.079741   22692 client.go:168] LocalClient.Create starting
	I1105 10:43:22.079766   22692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:43:22.079827   22692 main.go:141] libmachine: Decoding PEM data...
	I1105 10:43:22.079839   22692 main.go:141] libmachine: Parsing certificate...
	I1105 10:43:22.079877   22692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:43:22.079925   22692 main.go:141] libmachine: Decoding PEM data...
	I1105 10:43:22.079938   22692 main.go:141] libmachine: Parsing certificate...
	I1105 10:43:22.079951   22692 main.go:141] libmachine: Running pre-create checks...
	I1105 10:43:22.079956   22692 main.go:141] libmachine: (offline-docker-052000) Calling .PreCreateCheck
	I1105 10:43:22.080037   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.080079   22692 main.go:141] libmachine: (offline-docker-052000) Calling .GetConfigRaw
	I1105 10:43:22.135129   22692 main.go:141] libmachine: Creating machine...
	I1105 10:43:22.135152   22692 main.go:141] libmachine: (offline-docker-052000) Calling .Create
	I1105 10:43:22.135244   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.135409   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:43:22.135239   22889 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:43:22.135468   22692 main.go:141] libmachine: (offline-docker-052000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:43:22.327588   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:43:22.327506   22889 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/id_rsa...
	I1105 10:43:22.411851   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:43:22.411776   22889 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk...
	I1105 10:43:22.411865   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Writing magic tar header
	I1105 10:43:22.411874   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Writing SSH key tar header
	I1105 10:43:22.412451   22692 main.go:141] libmachine: (offline-docker-052000) DBG | I1105 10:43:22.412411   22889 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000 ...
	I1105 10:43:22.795977   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.796004   22692 main.go:141] libmachine: (offline-docker-052000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid
	I1105 10:43:22.796017   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Using UUID 11583e33-a518-4fdd-a72c-4e48a104ce3d
	I1105 10:43:22.822759   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Generated MAC a2:7c:e6:ed:e4:80
	I1105 10:43:22.822781   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000
	I1105 10:43:22.822824   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11583e33-a518-4fdd-a72c-4e48a104ce3d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:43:22.822864   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11583e33-a518-4fdd-a72c-4e48a104ce3d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:43:22.822951   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11583e33-a518-4fdd-a72c-4e48a104ce3d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000"}
	I1105 10:43:22.823002   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11583e33-a518-4fdd-a72c-4e48a104ce3d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/offline-docker-052000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-052000"
	I1105 10:43:22.823016   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:43:22.826145   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 DEBUG: hyperkit: Pid is 22891
	I1105 10:43:22.826592   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 0
	I1105 10:43:22.826606   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.826678   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:22.828159   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:22.828315   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:22.828332   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:22.828342   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:22.828359   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:22.828376   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:22.828391   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:22.828416   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:22.828432   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:22.828446   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:22.828458   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:22.828475   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:22.828487   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:22.828498   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:22.828515   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:22.828527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:22.828541   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:22.828551   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:22.828559   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:22.828585   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:22.828596   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:22.836892   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:43:22.845998   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/offline-docker-052000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:43:22.847148   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:43:22.847194   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:43:22.847210   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:43:22.847224   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:43:23.235835   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:43:23.235850   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:43:23.350495   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:43:23.350514   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:43:23.350552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:43:23.350567   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:43:23.351380   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:43:23.351411   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:43:24.829227   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 1
	I1105 10:43:24.829242   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:24.829331   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:24.830319   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:24.830392   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:24.830401   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:24.830409   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:24.830414   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:24.830420   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:24.830425   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:24.830431   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:24.830437   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:24.830459   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:24.830472   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:24.830481   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:24.830489   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:24.830512   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:24.830527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:24.830537   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:24.830546   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:24.830553   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:24.830558   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:24.830564   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:24.830571   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:26.832635   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 2
	I1105 10:43:26.832651   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:26.832702   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:26.833680   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:26.833736   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:26.833749   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:26.833783   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:26.833793   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:26.833800   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:26.833807   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:26.833814   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:26.833820   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:26.833834   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:26.833844   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:26.833853   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:26.833861   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:26.833875   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:26.833886   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:26.833895   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:26.833901   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:26.833914   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:26.833922   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:26.833932   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:26.833937   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:28.712623   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:43:28.712767   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:43:28.712778   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:43:28.732610   22692 main.go:141] libmachine: (offline-docker-052000) DBG | 2024/11/05 10:43:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:43:28.836115   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 3
	I1105 10:43:28.836141   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:28.836352   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:28.838128   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:28.838320   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:28.838334   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:28.838344   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:28.838351   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:28.838360   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:28.838367   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:28.838388   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:28.838403   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:28.838413   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:28.838424   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:28.838459   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:28.838477   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:28.838489   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:28.838521   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:28.838540   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:28.838552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:28.838576   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:28.838600   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:28.838612   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:28.838623   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:30.838551   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 4
	I1105 10:43:30.838568   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:30.838634   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:30.839616   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:30.839726   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:30.839737   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:30.839746   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:30.839751   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:30.839788   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:30.839801   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:30.839809   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:30.839817   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:30.839830   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:30.839841   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:30.839855   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:30.839864   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:30.839870   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:30.839877   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:30.839887   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:30.839895   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:30.839902   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:30.839910   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:30.839916   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:30.839930   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:32.841957   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 5
	I1105 10:43:32.841984   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:32.841998   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:32.843074   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:32.843114   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:32.843127   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:32.843137   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:32.843144   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:32.843150   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:32.843159   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:32.843172   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:32.843180   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:32.843199   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:32.843207   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:32.843217   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:32.843226   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:32.843250   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:32.843261   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:32.843272   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:32.843281   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:32.843288   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:32.843313   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:32.843328   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:32.843341   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:34.844795   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 6
	I1105 10:43:34.844810   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:34.844820   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:34.845811   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:34.845904   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:34.845917   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:34.845923   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:34.845931   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:34.845943   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:34.845952   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:34.845974   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:34.845989   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:34.846009   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:34.846020   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:34.846028   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:34.846036   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:34.846050   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:34.846064   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:34.846072   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:34.846079   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:34.846086   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:34.846094   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:34.846100   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:34.846109   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:36.846188   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 7
	I1105 10:43:36.846202   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:36.846277   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:36.847239   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:36.847321   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:36.847332   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:36.847339   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:36.847346   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:36.847352   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:36.847366   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:36.847378   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:36.847387   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:36.847395   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:36.847416   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:36.847427   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:36.847435   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:36.847442   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:36.847449   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:36.847457   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:36.847472   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:36.847483   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:36.847492   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:36.847500   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:36.847508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:38.849533   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 8
	I1105 10:43:38.849549   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:38.849613   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:38.850550   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:38.850659   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:38.850667   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:38.850674   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:38.850679   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:38.850685   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:38.850692   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:38.850698   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:38.850705   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:38.850722   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:38.850735   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:38.850749   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:38.850755   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:38.850769   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:38.850777   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:38.850795   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:38.850808   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:38.850827   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:38.850838   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:38.850859   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:38.850869   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:40.851354   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 9
	I1105 10:43:40.851369   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:40.851472   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:40.852427   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:40.852482   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:40.852497   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:40.852505   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:40.852520   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:40.852527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:40.852541   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:40.852557   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:40.852566   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:40.852574   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:40.852581   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:40.852588   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:40.852650   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:40.852679   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:40.852688   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:40.852694   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:40.852706   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:40.852715   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:40.852732   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:40.852744   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:40.852754   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:42.854619   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 10
	I1105 10:43:42.854637   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:42.854655   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:42.855614   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:42.855700   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:42.855716   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:42.855725   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:42.855734   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:42.855741   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:42.855747   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:42.855755   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:42.855760   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:42.855767   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:42.855777   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:42.855783   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:42.855789   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:42.855800   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:42.855811   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:42.855825   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:42.855838   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:42.855852   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:42.855859   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:42.855866   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:42.855874   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:44.856984   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 11
	I1105 10:43:44.856996   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:44.857037   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:44.857969   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:44.858067   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:44.858078   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:44.858091   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:44.858099   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:44.858107   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:44.858112   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:44.858127   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:44.858139   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:44.858150   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:44.858161   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:44.858190   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:44.858203   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:44.858214   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:44.858224   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:44.858231   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:44.858238   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:44.858256   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:44.858268   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:44.858277   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:44.858289   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:46.860323   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 12
	I1105 10:43:46.860336   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:46.860345   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:46.861327   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:46.861412   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:46.861421   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:46.861429   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:46.861445   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:46.861461   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:46.861474   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:46.861481   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:46.861487   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:46.861504   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:46.861513   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:46.861520   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:46.861528   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:46.861535   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:46.861543   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:46.861555   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:46.861565   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:46.861575   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:46.861582   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:46.861588   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:46.861594   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:48.862917   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 13
	I1105 10:43:48.862933   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:48.863035   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:48.863978   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:48.864072   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:48.864084   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:48.864113   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:48.864126   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:48.864146   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:48.864166   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:48.864176   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:48.864184   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:48.864199   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:48.864212   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:48.864220   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:48.864228   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:48.864235   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:48.864245   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:48.864258   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:48.864267   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:48.864290   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:48.864304   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:48.864312   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:48.864322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:50.865586   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 14
	I1105 10:43:50.865598   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:50.865673   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:50.866628   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:50.866716   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:50.866725   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:50.866741   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:50.866747   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:50.866753   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:50.866761   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:50.866768   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:50.866774   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:50.866784   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:50.866793   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:50.866809   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:50.866821   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:50.866840   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:50.866848   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:50.866856   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:50.866863   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:50.866870   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:50.866876   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:50.866882   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:50.866890   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:52.866948   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 15
	I1105 10:43:52.866964   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:52.867048   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:52.868007   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:52.868073   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:52.868087   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:52.868097   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:52.868106   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:52.868127   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:52.868139   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:52.868146   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:52.868152   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:52.868164   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:52.868174   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:52.868190   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:52.868204   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:52.868225   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:52.868237   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:52.868248   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:52.868255   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:52.868261   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:52.868267   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:52.868276   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:52.868295   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:54.870308   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 16
	I1105 10:43:54.870323   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:54.870392   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:54.871374   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:54.871436   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:54.871446   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:54.871455   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:54.871471   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:54.871487   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:54.871497   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:54.871514   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:54.871531   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:54.871543   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:54.871552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:54.871562   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:54.871568   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:54.871575   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:54.871583   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:54.871598   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:54.871610   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:54.871618   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:54.871634   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:54.871642   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:54.871649   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:56.872287   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 17
	I1105 10:43:56.872300   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:56.872351   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:56.873322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:56.873416   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:56.873426   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:56.873443   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:56.873449   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:56.873459   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:56.873465   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:56.873471   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:56.873477   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:56.873483   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:56.873489   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:56.873496   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:56.873504   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:56.873521   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:56.873534   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:56.873544   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:56.873552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:56.873564   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:56.873573   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:56.873589   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:56.873602   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:58.873957   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 18
	I1105 10:43:58.873972   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:58.874041   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:43:58.875031   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:43:58.875099   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:58.875112   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:58.875148   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:58.875162   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:58.875178   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:58.875189   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:58.875197   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:58.875202   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:58.875210   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:58.875219   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:58.875227   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:58.875234   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:58.875243   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:58.875254   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:58.875261   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:58.875271   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:58.875280   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:58.875292   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:58.875304   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:58.875314   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:00.875966   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 19
	I1105 10:44:00.875982   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:00.876041   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:00.877005   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:00.877096   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:00.877107   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:00.877117   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:00.877123   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:00.877130   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:00.877135   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:00.877141   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:00.877153   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:00.877160   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:00.877166   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:00.877173   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:00.877180   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:00.877190   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:00.877198   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:00.877208   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:00.877215   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:00.877228   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:00.877245   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:00.877256   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:00.877275   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:02.879353   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 20
	I1105 10:44:02.879370   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:02.879390   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:02.880374   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:02.880446   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:02.880457   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:02.880475   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:02.880484   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:02.880492   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:02.880500   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:02.880508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:02.880515   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:02.880528   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:02.880535   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:02.880543   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:02.880559   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:02.880568   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:02.880586   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:02.880593   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:02.880600   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:02.880609   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:02.880616   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:02.880621   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:02.880639   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:04.882447   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 21
	I1105 10:44:04.882460   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:04.882520   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:04.883491   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:04.883553   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:04.883561   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:04.883572   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:04.883589   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:04.883595   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:04.883602   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:04.883611   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:04.883617   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:04.883624   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:04.883647   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:04.883663   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:04.883676   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:04.883684   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:04.883692   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:04.883708   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:04.883716   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:04.883723   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:04.883731   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:04.883737   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:04.883745   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:06.885759   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 22
	I1105 10:44:06.885775   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:06.885826   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:06.886810   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:06.886887   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:06.886898   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:06.886906   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:06.886912   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:06.886918   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:06.886923   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:06.886938   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:06.886953   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:06.886969   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:06.886981   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:06.886997   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:06.887006   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:06.887013   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:06.887020   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:06.887032   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:06.887039   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:06.887046   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:06.887055   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:06.887061   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:06.887075   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:08.889113   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 23
	I1105 10:44:08.889126   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:08.889173   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:08.890160   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:08.890233   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:08.890243   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:08.890249   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:08.890255   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:08.890262   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:08.890267   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:08.890273   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:08.890292   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:08.890301   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:08.890307   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:08.890313   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:08.890319   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:08.890325   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:08.890332   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:08.890338   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:08.890344   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:08.890351   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:08.890365   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:08.890373   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:08.890382   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:10.891058   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 24
	I1105 10:44:10.891074   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:10.891141   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:10.892072   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:10.892179   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:10.892192   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:10.892199   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:10.892207   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:10.892219   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:10.892233   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:10.892249   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:10.892260   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:10.892268   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:10.892274   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:10.892281   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:10.892288   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:10.892295   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:10.892303   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:10.892309   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:10.892316   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:10.892322   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:10.892330   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:10.892346   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:10.892358   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:12.893928   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 25
	I1105 10:44:12.893952   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:12.893988   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:12.894943   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:12.895028   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:12.895036   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:12.895045   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:12.895050   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:12.895068   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:12.895078   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:12.895086   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:12.895092   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:12.895107   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:12.895118   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:12.895140   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:12.895153   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:12.895163   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:12.895172   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:12.895179   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:12.895186   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:12.895193   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:12.895201   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:12.895224   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:12.895242   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:14.896437   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 26
	I1105 10:44:14.896452   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:14.896501   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:14.897443   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:14.897527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:14.897537   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:14.897546   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:14.897552   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:14.897559   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:14.897565   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:14.897582   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:14.897597   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:14.897614   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:14.897639   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:14.897651   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:14.897661   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:14.897671   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:14.897678   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:14.897693   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:14.897706   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:14.897720   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:14.897731   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:14.897741   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:14.897763   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:16.899780   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 27
	I1105 10:44:16.899796   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:16.899836   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:16.900794   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:16.900855   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:16.900893   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:16.900911   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:16.900921   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:16.900936   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:16.900947   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:16.900955   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:16.900962   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:16.900970   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:16.900976   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:16.900988   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:16.901003   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:16.901014   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:16.901021   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:16.901028   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:16.901037   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:16.901052   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:16.901066   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:16.901075   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:16.901081   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:18.901408   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 28
	I1105 10:44:18.901424   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:18.901433   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:18.902397   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:18.902479   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:18.902499   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:18.902508   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:18.902515   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:18.902522   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:18.902527   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:18.902538   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:18.902545   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:18.902565   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:18.902585   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:18.902594   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:18.902602   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:18.902609   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:18.902615   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:18.902627   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:18.902644   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:18.902653   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:18.902660   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:18.902668   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:18.902674   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:20.903170   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Attempt 29
	I1105 10:44:20.903185   22692 main.go:141] libmachine: (offline-docker-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:20.903265   22692 main.go:141] libmachine: (offline-docker-052000) DBG | hyperkit pid from json: 22891
	I1105 10:44:20.904208   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Searching for a2:7c:e6:ed:e4:80 in /var/db/dhcpd_leases ...
	I1105 10:44:20.904302   22692 main.go:141] libmachine: (offline-docker-052000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:20.904312   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:20.904319   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:20.904335   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:20.904343   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:20.904349   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:20.904355   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:20.904374   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:20.904383   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:20.904391   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:20.904398   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:20.904405   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:20.904411   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:20.904419   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:20.904427   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:20.904436   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:20.904443   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:20.904450   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:20.904467   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:20.904479   22692 main.go:141] libmachine: (offline-docker-052000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:22.906499   22692 client.go:171] duration metric: took 1m0.825654088s to LocalClient.Create
	I1105 10:44:24.907581   22692 start.go:128] duration metric: took 1m2.860896283s to createHost
	I1105 10:44:24.907609   22692 start.go:83] releasing machines lock for "offline-docker-052000", held for 1m2.861004834s
	W1105 10:44:24.907680   22692 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-052000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:7c:e6:ed:e4:80
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-052000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:7c:e6:ed:e4:80
	I1105 10:44:24.970818   22692 out.go:201] 
	W1105 10:44:24.992078   22692 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:7c:e6:ed:e4:80
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:7c:e6:ed:e4:80
	W1105 10:44:24.992094   22692 out.go:270] * 
	* 
	W1105 10:44:24.992741   22692 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:44:25.054915   22692 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-052000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-11-05 10:44:25.175777 -0800 PST m=+3841.728030938
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-052000 -n offline-docker-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-052000 -n offline-docker-052000: exit status 7 (98.180871ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1105 10:44:25.271667   22906 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:44:25.271688   22906 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-052000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-052000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-052000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-052000: (5.272986526s)
--- FAIL: TestOffline (195.14s)

TestCertOptions (251.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-316000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E1105 10:50:59.153810   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:51:26.867554   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:51:31.297515   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-316000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m5.995961282s)

-- stdout --
	* [cert-options-316000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-316000" primary control-plane node in "cert-options-316000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-316000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for be:3b:e3:88:be:a4
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-316000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:fe:19:9c:9d:5d
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:fe:19:9c:9d:5d
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-316000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-316000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-316000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (180.737567ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-316000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-316000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-316000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-316000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-316000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (180.801936ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-316000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-316000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-316000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-11-05 10:53:52.687681 -0800 PST m=+4409.196428387
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-316000 -n cert-options-316000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-316000 -n cert-options-316000: exit status 7 (99.408353ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1105 10:53:52.785127   23148 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:53:52.785146   23148 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-316000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-316000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-316000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-316000: (5.256868352s)
--- FAIL: TestCertOptions (251.76s)

TestCertExpiration (1718.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1105 10:48:43.021649   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:49:34.166046   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.611184922s)

-- stdout --
	* [cert-expiration-488000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-488000" primary control-plane node in "cert-expiration-488000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-488000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:3d:08:91:fe:91
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-488000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:b4:21:6c:67:9d
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4a:b4:21:6c:67:9d
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1105 10:55:59.160251   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:56:31.306450   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m26.931549749s)

-- stdout --
	* [cert-expiration-488000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-488000" primary control-plane node in "cert-expiration-488000" cluster
	* Updating the running hyperkit "cert-expiration-488000" VM ...
	* Updating the running hyperkit "cert-expiration-488000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-488000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-488000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-488000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-488000" primary control-plane node in "cert-expiration-488000" cluster
	* Updating the running hyperkit "cert-expiration-488000" VM ...
	* Updating the running hyperkit "cert-expiration-488000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-488000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-11-05 11:17:16.338163 -0800 PST m=+5812.759715666
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-488000 -n cert-expiration-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-488000 -n cert-expiration-488000: exit status 7 (98.039632ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1105 11:17:16.433629   24647 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 11:17:16.433654   24647 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-488000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-488000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-488000: (5.267915241s)
--- FAIL: TestCertExpiration (1718.91s)

TestDockerFlags (252.32s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-536000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E1105 10:45:59.146653   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.154315   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.167706   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.191075   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.233317   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.315486   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.477543   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:45:59.799554   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:00.441526   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:01.723219   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:04.285989   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:09.408792   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:19.651017   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:31.291798   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:46:40.132981   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:47:21.097149   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-536000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.477912872s)
                                                
-- stdout --
	* [docker-flags-536000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-536000" primary control-plane node in "docker-flags-536000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-536000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1105 10:45:34.015470   22955 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:45:34.016216   22955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:45:34.016225   22955 out.go:358] Setting ErrFile to fd 2...
	I1105 10:45:34.016231   22955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:45:34.016875   22955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:45:34.018716   22955 out.go:352] Setting JSON to false
	I1105 10:45:34.047186   22955 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9903,"bootTime":1730822431,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:45:34.047358   22955 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:45:34.068955   22955 out.go:177] * [docker-flags-536000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:45:34.111705   22955 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:45:34.111724   22955 notify.go:220] Checking for updates...
	I1105 10:45:34.153660   22955 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:45:34.174716   22955 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:45:34.195500   22955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:45:34.216648   22955 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:45:34.237665   22955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:45:34.258981   22955 config.go:182] Loaded profile config "force-systemd-flag-892000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:45:34.259067   22955 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:45:34.290804   22955 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:45:34.332567   22955 start.go:297] selected driver: hyperkit
	I1105 10:45:34.332592   22955 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:45:34.332604   22955 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:45:34.338233   22955 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:45:34.338390   22955 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:45:34.349746   22955 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:45:34.356182   22955 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:45:34.356204   22955 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:45:34.356238   22955 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:45:34.356484   22955 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1105 10:45:34.356516   22955 cni.go:84] Creating CNI manager for ""
	I1105 10:45:34.356561   22955 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 10:45:34.356567   22955 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 10:45:34.356639   22955 start.go:340] cluster config:
	{Name:docker-flags-536000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-536000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:45:34.356730   22955 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:45:34.398662   22955 out.go:177] * Starting "docker-flags-536000" primary control-plane node in "docker-flags-536000" cluster
	I1105 10:45:34.419510   22955 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:45:34.419554   22955 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:45:34.419566   22955 cache.go:56] Caching tarball of preloaded images
	I1105 10:45:34.419698   22955 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:45:34.419707   22955 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:45:34.419787   22955 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/docker-flags-536000/config.json ...
	I1105 10:45:34.419806   22955 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/docker-flags-536000/config.json: {Name:mk2260372e19e54898b576c74c1f0540c0596a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:45:34.420150   22955 start.go:360] acquireMachinesLock for docker-flags-536000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:46:31.161487   22955 start.go:364] duration metric: took 56.739653179s to acquireMachinesLock for "docker-flags-536000"
	I1105 10:46:31.161527   22955 start.go:93] Provisioning new machine with config: &{Name:docker-flags-536000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSH
Key: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-536000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:46:31.161598   22955 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:46:31.183050   22955 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:46:31.183200   22955 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:46:31.183237   22955 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:46:31.194461   22955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60825
	I1105 10:46:31.194781   22955 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:46:31.195203   22955 main.go:141] libmachine: Using API Version  1
	I1105 10:46:31.195212   22955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:46:31.195476   22955 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:46:31.195579   22955 main.go:141] libmachine: (docker-flags-536000) Calling .GetMachineName
	I1105 10:46:31.195665   22955 main.go:141] libmachine: (docker-flags-536000) Calling .DriverName
	I1105 10:46:31.195765   22955 start.go:159] libmachine.API.Create for "docker-flags-536000" (driver="hyperkit")
	I1105 10:46:31.195790   22955 client.go:168] LocalClient.Create starting
	I1105 10:46:31.195825   22955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:46:31.195890   22955 main.go:141] libmachine: Decoding PEM data...
	I1105 10:46:31.195909   22955 main.go:141] libmachine: Parsing certificate...
	I1105 10:46:31.195970   22955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:46:31.196017   22955 main.go:141] libmachine: Decoding PEM data...
	I1105 10:46:31.196027   22955 main.go:141] libmachine: Parsing certificate...
	I1105 10:46:31.196039   22955 main.go:141] libmachine: Running pre-create checks...
	I1105 10:46:31.196046   22955 main.go:141] libmachine: (docker-flags-536000) Calling .PreCreateCheck
	I1105 10:46:31.196119   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.196331   22955 main.go:141] libmachine: (docker-flags-536000) Calling .GetConfigRaw
	I1105 10:46:31.250814   22955 main.go:141] libmachine: Creating machine...
	I1105 10:46:31.250835   22955 main.go:141] libmachine: (docker-flags-536000) Calling .Create
	I1105 10:46:31.250938   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.251119   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:46:31.250924   22978 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:46:31.251203   22955 main.go:141] libmachine: (docker-flags-536000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:46:31.441316   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:46:31.441207   22978 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/id_rsa...
	I1105 10:46:31.507354   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:46:31.507261   22978 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk...
	I1105 10:46:31.507363   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Writing magic tar header
	I1105 10:46:31.507371   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Writing SSH key tar header
	I1105 10:46:31.507956   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:46:31.507916   22978 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000 ...
	I1105 10:46:31.891805   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.891831   22955 main.go:141] libmachine: (docker-flags-536000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid
	I1105 10:46:31.891840   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Using UUID e009ea15-aeb9-4f91-8c54-0365b17ee080
	I1105 10:46:31.917924   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Generated MAC 8e:51:ce:2c:75:94
	I1105 10:46:31.917944   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000
	I1105 10:46:31.917977   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e009ea15-aeb9-4f91-8c54-0365b17ee080", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"",
process:(*os.Process)(nil)}
	I1105 10:46:31.918002   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e009ea15-aeb9-4f91-8c54-0365b17ee080", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"",
process:(*os.Process)(nil)}
	I1105 10:46:31.918052   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e009ea15-aeb9-4f91-8c54-0365b17ee080", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage,/Users/jen
kins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000"}
	I1105 10:46:31.918100   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e009ea15-aeb9-4f91-8c54-0365b17ee080 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docke
r-flags-536000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000"
	I1105 10:46:31.918133   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:46:31.921041   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 DEBUG: hyperkit: Pid is 22979
	I1105 10:46:31.921627   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 0
	I1105 10:46:31.921644   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.921707   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:31.922879   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:31.923057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:31.923088   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:31.923138   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:31.923166   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:31.923177   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:31.923183   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:31.923194   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:31.923200   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:31.923270   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:31.923304   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:31.923323   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:31.923337   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:31.923351   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:31.923371   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:31.923384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:31.923394   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:31.923403   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:31.923419   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:31.923430   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:31.923446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:31.931977   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:46:31.940699   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:46:31.941770   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:46:31.941797   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:46:31.941809   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:46:31.941822   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:46:32.324911   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:46:32.324926   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:46:32.439597   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:46:32.439618   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:46:32.439629   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:46:32.439646   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:46:32.440491   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:46:32.440502   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:32 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:46:33.924351   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 1
	I1105 10:46:33.924367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:33.924409   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:33.925384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:33.925473   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:33.925484   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:33.925496   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:33.925503   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:33.925514   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:33.925521   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:33.925527   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:33.925536   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:33.925551   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:33.925563   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:33.925580   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:33.925591   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:33.925598   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:33.925606   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:33.925620   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:33.925635   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:33.925643   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:33.925650   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:33.925658   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:33.925665   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:35.926882   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 2
	I1105 10:46:35.926897   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:35.926990   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:35.927938   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:35.928031   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:35.928041   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:35.928050   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:35.928055   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:35.928063   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:35.928069   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:35.928084   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:35.928094   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:35.928102   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:35.928108   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:35.928114   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:35.928120   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:35.928126   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:35.928135   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:35.928143   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:35.928152   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:35.928159   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:35.928166   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:35.928188   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:35.928198   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:37.787480   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:46:37.787554   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:46:37.787563   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:46:37.807315   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:46:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:46:37.930427   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 3
	I1105 10:46:37.930450   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:37.930640   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:37.932382   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:37.932566   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:37.932580   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:37.932592   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:37.932602   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:37.932622   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:37.932636   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:37.932646   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:37.932669   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:37.932693   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:37.932710   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:37.932721   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:37.932732   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:37.932745   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:37.932755   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:37.932764   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:37.932773   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:37.932788   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:37.932803   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:37.932814   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:37.932826   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:39.932838   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 4
	I1105 10:46:39.932855   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:39.932915   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:39.933893   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:39.933991   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:39.933999   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:39.934009   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:39.934017   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:39.934026   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:39.934035   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:39.934043   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:39.934049   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:39.934066   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:39.934078   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:39.934086   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:39.934094   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:39.934101   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:39.934108   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:39.934119   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:39.934127   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:39.934134   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:39.934143   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:39.934150   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:39.934163   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:41.936265   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 5
	I1105 10:46:41.936283   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:41.936317   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:41.937306   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:41.937367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:41.937377   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:41.937384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:41.937390   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:41.937408   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:41.937415   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:41.937437   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:41.937448   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:41.937455   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:41.937464   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:41.937480   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:41.937488   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:41.937495   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:41.937504   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:41.937513   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:41.937521   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:41.937528   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:41.937535   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:41.937542   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:41.937547   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:43.938466   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 6
	I1105 10:46:43.938482   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:43.938543   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:43.939500   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:43.939587   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:43.939597   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:43.939609   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:43.939617   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:43.939623   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:43.939629   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:43.939645   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:43.939667   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:43.939677   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:43.939687   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:43.939701   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:43.939713   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:43.939728   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:43.939736   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:43.939751   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:43.939764   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:43.939784   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:43.939795   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:43.939814   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:43.939822   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:45.941854   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 7
	I1105 10:46:45.941868   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:45.941915   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:45.942840   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:45.942927   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:45.942937   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:45.942960   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:45.942970   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:45.942992   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:45.943000   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:45.943007   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:45.943013   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:45.943020   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:45.943033   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:45.943042   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:45.943048   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:45.943055   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:45.943062   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:45.943068   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:45.943074   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:45.943087   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:45.943099   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:45.943120   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:45.943132   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:47.943175   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 8
	I1105 10:46:47.943188   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:47.943244   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:47.944215   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:47.944280   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:47.944291   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:47.944300   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:47.944306   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:47.944323   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:47.944330   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:47.944337   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:47.944343   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:47.944351   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:47.944357   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:47.944371   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:47.944384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:47.944392   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:47.944400   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:47.944409   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:47.944417   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:47.944424   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:47.944431   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:47.944445   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:47.944453   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:49.945173   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 9
	I1105 10:46:49.945185   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:49.945194   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:49.946168   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:49.946217   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:49.946227   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:49.946243   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:49.946258   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:49.946269   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:49.946275   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:49.946286   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:49.946301   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:49.946310   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:49.946318   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:49.946326   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:49.946332   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:49.946337   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:49.946343   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:49.946349   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:49.946359   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:49.946367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:49.946379   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:49.946391   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:49.946399   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:51.948469   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 10
	I1105 10:46:51.948483   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:51.948508   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:51.949460   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:51.949524   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:51.949534   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:51.949542   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:51.949547   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:51.949561   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:51.949575   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:51.949583   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:51.949589   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:51.949606   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:51.949616   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:51.949629   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:51.949638   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:51.949646   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:51.949654   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:51.949661   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:51.949669   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:51.949675   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:51.949683   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:51.949690   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:51.949698   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:53.950022   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 11
	I1105 10:46:53.950037   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:53.950091   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:53.951075   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:53.951154   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:53.951163   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:53.951177   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:53.951183   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:53.951189   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:53.951195   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:53.951201   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:53.951206   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:53.951219   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:53.951225   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:53.951232   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:53.951241   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:53.951255   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:53.951267   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:53.951274   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:53.951280   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:53.951287   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:53.951295   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:53.951315   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:53.951327   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:55.951940   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 12
	I1105 10:46:55.951954   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:55.952000   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:55.952958   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:55.953057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:55.953071   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:55.953077   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:55.953096   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:55.953118   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:55.953136   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:55.953152   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:55.953169   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:55.953187   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:55.953206   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:55.953223   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:55.953240   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:55.953257   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:55.953274   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:55.953292   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:55.953311   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:55.953323   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:55.953331   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:55.953339   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:55.953349   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:57.955354   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 13
	I1105 10:46:57.955370   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:57.955412   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:57.956328   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:57.956416   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:57.956425   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:57.956433   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:57.956442   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:57.956451   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:57.956458   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:57.956470   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:57.956494   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:57.956506   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:57.956518   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:57.956526   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:57.956533   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:57.956540   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:57.956556   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:57.956564   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:57.956571   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:57.956579   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:57.956585   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:57.956593   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:57.956601   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:59.957003   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 14
	I1105 10:46:59.957018   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:59.957062   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:46:59.958029   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:46:59.958084   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:59.958092   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:59.958100   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:59.958105   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:59.958124   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:59.958141   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:59.958149   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:59.958158   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:59.958170   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:59.958182   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:59.958192   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:59.958201   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:59.958208   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:59.958223   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:59.958237   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:59.958250   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:59.958257   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:59.958263   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:59.958279   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:59.958291   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:01.959661   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 15
	I1105 10:47:01.959676   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:01.959704   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:01.960661   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:01.960729   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:01.960739   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:01.960748   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:01.960753   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:01.960759   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:01.960764   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:01.960770   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:01.960776   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:01.960783   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:01.960790   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:01.960796   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:01.960805   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:01.960812   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:01.960820   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:01.960830   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:01.960838   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:01.960845   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:01.960853   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:01.960859   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:01.960866   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:03.962934   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 16
	I1105 10:47:03.962949   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:03.963022   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:03.963959   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:03.964041   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:03.964051   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:03.964059   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:03.964065   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:03.964073   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:03.964079   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:03.964103   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:03.964123   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:03.964135   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:03.964144   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:03.964155   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:03.964164   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:03.964172   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:03.964179   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:03.964186   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:03.964192   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:03.964207   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:03.964217   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:03.964224   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:03.964237   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:05.964843   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 17
	I1105 10:47:05.964861   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:05.964928   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:05.965875   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:05.965939   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:05.965947   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:05.965955   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:05.965963   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:05.965972   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:05.965991   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:05.966003   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:05.966016   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:05.966027   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:05.966036   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:05.966046   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:05.966054   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:05.966060   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:05.966066   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:05.966080   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:05.966098   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:05.966109   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:05.966117   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:05.966128   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:05.966138   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:07.968227   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 18
	I1105 10:47:07.968245   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:07.968278   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:07.969225   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:07.969309   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:07.969318   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:07.969328   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:07.969334   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:07.969341   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:07.969347   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:07.969365   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:07.969377   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:07.969397   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:07.969403   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:07.969413   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:07.969421   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:07.969429   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:07.969435   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:07.969449   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:07.969460   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:07.969467   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:07.969475   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:07.969483   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:07.969490   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:09.970108   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 19
	I1105 10:47:09.970131   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:09.970215   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:09.971141   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:09.971237   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:09.971245   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:09.971255   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:09.971269   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:09.971277   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:09.971287   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:09.971295   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:09.971302   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:09.971308   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:09.971315   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:09.971327   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:09.971339   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:09.971357   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:09.971369   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:09.971384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:09.971397   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:09.971406   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:09.971413   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:09.971420   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:09.971428   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:11.973485   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 20
	I1105 10:47:11.973498   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:11.973531   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:11.974529   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:11.974607   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:11.974615   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:11.974626   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:11.974634   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:11.974641   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:11.974648   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:11.974665   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:11.974674   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:11.974685   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:11.974692   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:11.974699   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:11.974719   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:11.974726   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:11.974733   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:11.974741   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:11.974754   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:11.974766   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:11.974773   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:11.974781   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:11.974789   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:13.975867   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 21
	I1105 10:47:13.975880   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:13.975945   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:13.976917   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:13.976985   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:13.976993   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:13.977002   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:13.977011   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:13.977017   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:13.977022   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:13.977031   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:13.977048   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:13.977055   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:13.977070   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:13.977082   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:13.977098   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:13.977112   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:13.977124   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:13.977132   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:13.977139   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:13.977146   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:13.977154   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:13.977161   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:13.977184   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:15.977220   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 22
	I1105 10:47:15.977233   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:15.977298   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:15.978303   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:15.978333   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:15.978350   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:15.978370   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:15.978381   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:15.978406   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:15.978416   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:15.978424   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:15.978432   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:15.978446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:15.978457   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:15.978465   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:15.978473   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:15.978489   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:15.978503   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:15.978511   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:15.978518   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:15.978530   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:15.978538   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:15.978543   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:15.978559   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:17.978817   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 23
	I1105 10:47:17.978833   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:17.978891   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:17.979833   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:17.979927   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:17.979938   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:17.979946   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:17.979951   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:17.979957   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:17.979963   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:17.979982   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:17.979996   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:17.980006   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:17.980011   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:17.980017   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:17.980026   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:17.980043   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:17.980056   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:17.980073   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:17.980083   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:17.980090   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:17.980097   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:17.980116   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:17.980128   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:19.981976   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 24
	I1105 10:47:19.981992   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:19.982039   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:19.982965   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:19.983063   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:19.983077   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:19.983083   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:19.983089   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:19.983094   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:19.983108   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:19.983122   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:19.983131   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:19.983139   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:19.983151   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:19.983161   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:19.983169   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:19.983177   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:19.983184   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:19.983191   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:19.983198   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:19.983205   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:19.983212   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:19.983218   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:19.983226   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:21.983861   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 25
	I1105 10:47:21.983875   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:21.983921   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:21.984858   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:21.984955   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:21.984965   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:21.984973   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:21.984978   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:21.984984   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:21.984990   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:21.984995   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:21.985001   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:21.985017   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:21.985025   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:21.985042   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:21.985063   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:21.985089   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:21.985102   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:21.985114   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:21.985121   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:21.985127   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:21.985133   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:21.985141   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:21.985149   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:23.987199   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 26
	I1105 10:47:23.987211   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:23.987221   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:23.988187   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:23.988271   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:23.988280   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:23.988310   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:23.988318   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:23.988335   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:23.988344   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:23.988350   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:23.988357   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:23.988364   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:23.988377   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:23.988391   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:23.988398   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:23.988405   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:23.988413   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:23.988426   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:23.988439   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:23.988446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:23.988453   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:23.988461   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:23.988473   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:25.989343   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 27
	I1105 10:47:25.989357   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:25.989401   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:25.990359   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:25.990451   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:25.990459   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:25.990467   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:25.990477   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:25.990488   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:25.990507   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:25.990518   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:25.990527   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:25.990546   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:25.990557   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:25.990564   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:25.990572   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:25.990584   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:25.990593   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:25.990600   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:25.990607   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:25.990617   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:25.990626   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:25.990652   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:25.990662   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:27.992681   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 28
	I1105 10:47:27.992696   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:27.992762   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:27.993750   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:27.993823   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:27.993834   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:27.993845   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:27.993851   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:27.993858   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:27.993863   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:27.993879   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:27.993891   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:27.993907   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:27.993920   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:27.993929   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:27.993937   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:27.993945   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:27.993951   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:27.993958   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:27.993966   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:27.993973   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:27.993980   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:27.993997   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:27.994005   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:29.994159   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 29
	I1105 10:47:29.994173   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:29.994254   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:29.995214   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for 8e:51:ce:2c:75:94 in /var/db/dhcpd_leases ...
	I1105 10:47:29.995307   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:29.995330   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:29.995337   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:29.995345   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:29.995355   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:29.995369   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:29.995380   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:29.995392   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:29.995401   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:29.995409   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:29.995416   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:29.995424   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:29.995432   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:29.995439   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:29.995446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:29.995455   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:29.995460   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:29.995466   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:29.995473   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:29.995481   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:31.996587   22955 client.go:171] duration metric: took 1m0.799353629s to LocalClient.Create
	I1105 10:47:33.998711   22955 start.go:128] duration metric: took 1m2.835628767s to createHost
	I1105 10:47:33.998738   22955 start.go:83] releasing machines lock for "docker-flags-536000", held for 1m2.835764886s
	W1105 10:47:33.998752   22955 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:51:ce:2c:75:94
	I1105 10:47:33.999121   22955 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:47:33.999147   22955 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:47:34.010332   22955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60827
	I1105 10:47:34.010667   22955 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:47:34.011003   22955 main.go:141] libmachine: Using API Version  1
	I1105 10:47:34.011026   22955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:47:34.011279   22955 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:47:34.011665   22955 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:47:34.011697   22955 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:47:34.023015   22955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60829
	I1105 10:47:34.023448   22955 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:47:34.023853   22955 main.go:141] libmachine: Using API Version  1
	I1105 10:47:34.023867   22955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:47:34.024137   22955 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:47:34.024282   22955 main.go:141] libmachine: (docker-flags-536000) Calling .GetState
	I1105 10:47:34.024371   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.024453   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:34.025612   22955 main.go:141] libmachine: (docker-flags-536000) Calling .DriverName
	I1105 10:47:34.120196   22955 out.go:177] * Deleting "docker-flags-536000" in hyperkit ...
	I1105 10:47:34.141447   22955 main.go:141] libmachine: (docker-flags-536000) Calling .Remove
	I1105 10:47:34.141594   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.141610   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.141660   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:34.142839   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.142904   22955 main.go:141] libmachine: (docker-flags-536000) DBG | waiting for graceful shutdown
	I1105 10:47:35.143383   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:35.143462   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:35.144623   22955 main.go:141] libmachine: (docker-flags-536000) DBG | waiting for graceful shutdown
	I1105 10:47:36.144978   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:36.145070   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:36.146396   22955 main.go:141] libmachine: (docker-flags-536000) DBG | waiting for graceful shutdown
	I1105 10:47:37.148503   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:37.148560   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:37.149266   22955 main.go:141] libmachine: (docker-flags-536000) DBG | waiting for graceful shutdown
	I1105 10:47:38.150090   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:38.150166   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:38.151384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | waiting for graceful shutdown
	I1105 10:47:39.151859   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:39.151935   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 22979
	I1105 10:47:39.152807   22955 main.go:141] libmachine: (docker-flags-536000) DBG | sending sigkill
	I1105 10:47:39.152816   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:39.166025   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:47:39 WARN : hyperkit: failed to read stdout: EOF
	I1105 10:47:39.166045   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:47:39 WARN : hyperkit: failed to read stderr: EOF
	W1105 10:47:39.184054   22955 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:51:ce:2c:75:94
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:51:ce:2c:75:94
	I1105 10:47:39.184076   22955 start.go:729] Will try again in 5 seconds ...
	I1105 10:47:44.185340   22955 start.go:360] acquireMachinesLock for docker-flags-536000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:48:36.877599   22955 start.go:364] duration metric: took 52.690999675s to acquireMachinesLock for "docker-flags-536000"
	I1105 10:48:36.877640   22955 start.go:93] Provisioning new machine with config: &{Name:docker-flags-536000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-536000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:48:36.877705   22955 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:48:36.919860   22955 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:48:36.919935   22955 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:48:36.919991   22955 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:48:36.931393   22955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60833
	I1105 10:48:36.931764   22955 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:48:36.932168   22955 main.go:141] libmachine: Using API Version  1
	I1105 10:48:36.932181   22955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:48:36.932424   22955 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:48:36.932525   22955 main.go:141] libmachine: (docker-flags-536000) Calling .GetMachineName
	I1105 10:48:36.932652   22955 main.go:141] libmachine: (docker-flags-536000) Calling .DriverName
	I1105 10:48:36.932778   22955 start.go:159] libmachine.API.Create for "docker-flags-536000" (driver="hyperkit")
	I1105 10:48:36.932800   22955 client.go:168] LocalClient.Create starting
	I1105 10:48:36.932824   22955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:48:36.932887   22955 main.go:141] libmachine: Decoding PEM data...
	I1105 10:48:36.932900   22955 main.go:141] libmachine: Parsing certificate...
	I1105 10:48:36.932949   22955 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:48:36.932996   22955 main.go:141] libmachine: Decoding PEM data...
	I1105 10:48:36.933008   22955 main.go:141] libmachine: Parsing certificate...
	I1105 10:48:36.933020   22955 main.go:141] libmachine: Running pre-create checks...
	I1105 10:48:36.933026   22955 main.go:141] libmachine: (docker-flags-536000) Calling .PreCreateCheck
	I1105 10:48:36.933110   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:36.933134   22955 main.go:141] libmachine: (docker-flags-536000) Calling .GetConfigRaw
	I1105 10:48:36.940964   22955 main.go:141] libmachine: Creating machine...
	I1105 10:48:36.941007   22955 main.go:141] libmachine: (docker-flags-536000) Calling .Create
	I1105 10:48:36.941094   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:36.941278   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:48:36.941091   23008 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:48:36.941345   22955 main.go:141] libmachine: (docker-flags-536000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:48:37.362556   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:48:37.362445   23008 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/id_rsa...
	I1105 10:48:37.773132   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:48:37.773052   23008 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk...
	I1105 10:48:37.773150   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Writing magic tar header
	I1105 10:48:37.773164   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Writing SSH key tar header
	I1105 10:48:37.773731   22955 main.go:141] libmachine: (docker-flags-536000) DBG | I1105 10:48:37.773689   23008 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000 ...
	I1105 10:48:38.155826   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:38.155844   22955 main.go:141] libmachine: (docker-flags-536000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid
	I1105 10:48:38.155892   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Using UUID 2c75ee85-4022-4b86-937c-f1f5975f2530
	I1105 10:48:38.182406   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Generated MAC fe:36:00:6d:f8:41
	I1105 10:48:38.182424   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000
	I1105 10:48:38.182463   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c75ee85-4022-4b86-937c-f1f5975f2530", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:48:38.182496   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c75ee85-4022-4b86-937c-f1f5975f2530", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:48:38.182534   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c75ee85-4022-4b86-937c-f1f5975f2530", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000"}
	I1105 10:48:38.182578   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c75ee85-4022-4b86-937c-f1f5975f2530 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/docker-flags-536000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-536000"
	I1105 10:48:38.182597   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:48:38.185662   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 DEBUG: hyperkit: Pid is 23022
	I1105 10:48:38.186216   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 0
	I1105 10:48:38.186242   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:38.186277   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:38.187468   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:38.187586   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:38.187600   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:38.187614   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:38.187641   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:38.187656   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:38.187685   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:38.187699   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:38.187721   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:38.187734   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:38.187748   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:38.187763   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:38.187775   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:38.187789   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:38.187809   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:38.187826   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:38.187833   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:38.187840   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:38.187849   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:38.187860   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:38.187882   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:38.195997   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:48:38.204868   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/docker-flags-536000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:48:38.205802   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:48:38.205826   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:48:38.205837   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:48:38.205852   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:48:38.593558   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:48:38.593574   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:48:38.708227   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:48:38.708262   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:48:38.708278   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:48:38.708297   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:48:38.709093   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:48:38.709103   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:48:40.189174   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 1
	I1105 10:48:40.189190   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:40.189239   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:40.190278   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:40.190335   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:40.190351   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:40.190361   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:40.190368   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:40.190374   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:40.190380   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:40.190401   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:40.190415   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:40.190422   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:40.190429   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:40.190442   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:40.190454   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:40.190463   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:40.190472   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:40.190479   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:40.190486   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:40.190500   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:40.190512   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:40.190519   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:40.190528   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:42.190790   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 2
	I1105 10:48:42.190806   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:42.190860   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:42.191830   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:42.191935   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:42.191943   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:42.191952   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:42.191958   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:42.191964   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:42.191969   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:42.191975   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:42.192008   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:42.192026   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:42.192035   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:42.192042   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:42.192057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:42.192065   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:42.192072   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:42.192079   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:42.192085   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:42.192091   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:42.192104   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:42.192116   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:42.192126   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:44.082365   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:48:44.082455   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:48:44.082464   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:48:44.101856   22955 main.go:141] libmachine: (docker-flags-536000) DBG | 2024/11/05 10:48:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:48:44.193370   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 3
	I1105 10:48:44.193396   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:44.193650   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:44.195392   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:44.195628   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:44.195641   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:44.195650   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:44.195658   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:44.195666   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:44.195677   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:44.195696   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:44.195710   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:44.195727   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:44.195738   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:44.195749   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:44.195759   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:44.195797   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:44.195814   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:44.195825   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:44.195836   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:44.195846   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:44.195856   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:44.195866   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:44.195874   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:46.196576   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 4
	I1105 10:48:46.196591   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:46.196695   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:46.197703   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:46.197796   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:46.197817   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:46.197855   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:46.197872   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:46.197889   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:46.197899   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:46.197913   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:46.197924   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:46.197932   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:46.197943   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:46.197950   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:46.197959   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:46.197966   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:46.197972   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:46.197996   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:46.198004   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:46.198014   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:46.198021   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:46.198026   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:46.198039   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:48.198451   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 5
	I1105 10:48:48.198467   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:48.198528   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:48.199747   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:48.199828   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:48.199835   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:48.199873   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:48.199887   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:48.199898   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:48.199904   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:48.199911   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:48.199920   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:48.199929   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:48.199937   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:48.199943   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:48.199950   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:48.199956   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:48.199962   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:48.199970   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:48.199986   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:48.199994   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:48.200011   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:48.200019   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:48.200027   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:50.202100   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 6
	I1105 10:48:50.202112   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:50.202167   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:50.203233   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:50.203289   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:50.203302   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:50.203323   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:50.203335   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:50.203355   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:50.203367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:50.203382   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:50.203390   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:50.203397   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:50.203406   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:50.203417   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:50.203427   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:50.203436   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:50.203446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:50.203468   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:50.203480   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:50.203488   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:50.203495   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:50.203502   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:50.203511   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:52.205478   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 7
	I1105 10:48:52.205493   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:52.205542   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:52.206545   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:52.206591   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:52.206599   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:52.206609   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:52.206618   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:52.206629   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:52.206637   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:52.206643   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:52.206654   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:52.206684   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:52.206693   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:52.206699   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:52.206705   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:52.206722   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:52.206735   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:52.206742   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:52.206751   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:52.206761   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:52.206770   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:52.206777   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:52.206786   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:54.208367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 8
	I1105 10:48:54.208382   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:54.208431   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:54.209385   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:54.209529   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:54.209541   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:54.209550   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:54.209557   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:54.209565   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:54.209574   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:54.209588   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:54.209605   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:54.209619   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:54.209641   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:54.209659   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:54.209670   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:54.209692   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:54.209725   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:54.209732   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:54.209740   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:54.209759   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:54.209773   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:54.209781   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:54.209788   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:56.210048   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 9
	I1105 10:48:56.210064   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:56.210136   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:56.211159   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:56.211276   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:56.211287   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:56.211296   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:56.211301   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:56.211307   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:56.211312   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:56.211318   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:56.211323   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:56.211329   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:56.211338   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:56.211359   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:56.211373   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:56.211388   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:56.211397   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:56.211404   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:56.211421   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:56.211437   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:56.211450   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:56.211456   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:56.211464   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:58.211717   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 10
	I1105 10:48:58.211733   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:58.211829   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:48:58.212811   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:48:58.212897   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:58.212905   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:58.212913   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:58.212919   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:58.212925   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:58.212931   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:58.212946   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:58.212959   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:58.212968   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:58.212977   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:58.212993   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:58.213002   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:58.213011   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:58.213018   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:58.213025   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:58.213035   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:58.213043   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:58.213050   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:58.213057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:58.213063   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:00.213301   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 11
	I1105 10:49:00.213317   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:00.213370   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:00.214316   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:00.214413   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:00.214422   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:00.214434   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:00.214444   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:00.214458   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:00.214472   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:00.214490   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:00.214499   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:00.214507   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:00.214523   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:00.214535   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:00.214543   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:00.214551   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:00.214558   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:00.214566   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:00.214573   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:00.214579   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:00.214595   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:00.214608   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:00.214659   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:02.215057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 12
	I1105 10:49:02.215070   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:02.215127   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:02.216060   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:02.216158   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:02.216168   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:02.216175   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:02.216180   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:02.216188   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:02.216195   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:02.216202   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:02.216211   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:02.216231   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:02.216242   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:02.216259   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:02.216274   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:02.216284   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:02.216290   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:02.216314   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:02.216322   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:02.216330   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:02.216337   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:02.216351   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:02.216363   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:04.218401   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 13
	I1105 10:49:04.218415   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:04.218489   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:04.219457   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:04.219540   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:04.219549   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:04.219566   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:04.219592   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:04.219612   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:04.219624   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:04.219631   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:04.219639   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:04.219645   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:04.219658   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:04.219672   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:04.219683   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:04.219694   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:04.219703   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:04.219715   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:04.219725   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:04.219733   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:04.219741   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:04.219751   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:04.219759   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:06.219897   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 14
	I1105 10:49:06.219910   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:06.219987   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:06.220954   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:06.221030   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:06.221040   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:06.221048   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:06.221053   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:06.221059   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:06.221064   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:06.221070   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:06.221076   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:06.221081   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:06.221086   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:06.221100   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:06.221112   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:06.221124   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:06.221132   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:06.221139   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:06.221147   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:06.221153   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:06.221161   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:06.221167   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:06.221174   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:08.221363   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 15
	I1105 10:49:08.221377   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:08.221432   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:08.222387   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:08.222473   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:08.222483   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:08.222492   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:08.222501   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:08.222507   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:08.222512   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:08.222519   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:08.222524   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:08.222537   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:08.222545   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:08.222553   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:08.222568   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:08.222581   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:08.222594   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:08.222606   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:08.222616   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:08.222623   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:08.222629   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:08.222637   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:08.222645   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:10.223230   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 16
	I1105 10:49:10.223247   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:10.223327   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:10.224274   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:10.224359   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:10.224368   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:10.224376   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:10.224384   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:10.224397   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:10.224412   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:10.224420   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:10.224433   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:10.224457   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:10.224468   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:10.224489   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:10.224500   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:10.224510   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:10.224516   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:10.224524   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:10.224531   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:10.224538   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:10.224552   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:10.224563   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:10.224572   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:12.226426   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 17
	I1105 10:49:12.226439   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:12.226509   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:12.227451   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:12.227556   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:12.227566   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:12.227575   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:12.227580   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:12.227603   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:12.227621   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:12.227630   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:12.227637   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:12.227645   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:12.227651   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:12.227659   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:12.227667   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:12.227675   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:12.227681   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:12.227687   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:12.227708   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:12.227719   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:12.227727   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:12.227734   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:12.227749   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:14.229788   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 18
	I1105 10:49:14.229800   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:14.229828   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:14.230785   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:14.230869   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:14.230886   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:14.230895   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:14.230901   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:14.230926   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:14.230945   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:14.230953   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:14.230962   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:14.230968   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:14.230976   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:14.230985   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:14.230992   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:14.231000   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:14.231005   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:14.231011   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:14.231018   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:14.231033   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:14.231047   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:14.231054   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:14.231059   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:16.231311   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 19
	I1105 10:49:16.231327   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:16.231386   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:16.232367   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:16.232449   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:16.232470   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:16.232488   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:16.232502   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:16.232512   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:16.232519   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:16.232525   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:16.232536   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:16.232552   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:16.232563   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:16.232570   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:16.232576   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:16.232582   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:16.232593   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:16.232598   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:16.232611   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:16.232619   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:16.232626   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:16.232634   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:16.232641   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:18.232701   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 20
	I1105 10:49:18.232723   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:18.232792   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:18.233734   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:18.233837   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:18.233848   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:18.233857   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:18.233871   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:18.233909   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:18.233924   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:18.233933   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:18.233939   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:18.233953   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:18.233965   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:18.233986   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:18.233999   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:18.234007   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:18.234014   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:18.234020   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:18.234028   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:18.234050   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:18.234070   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:18.234078   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:18.234086   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:20.236057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 21
	I1105 10:49:20.236071   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:20.236129   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:20.237080   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:20.237170   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:20.237180   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:20.237190   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:20.237196   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:20.237203   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:20.237208   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:20.237219   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:20.237230   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:20.237236   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:20.237243   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:20.237250   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:20.237256   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:20.237263   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:20.237280   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:20.237292   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:20.237311   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:20.237325   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:20.237335   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:20.237343   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:20.237358   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:22.239358   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 22
	I1105 10:49:22.239372   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:22.239439   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:22.240383   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:22.240485   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:22.240495   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:22.240510   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:22.240516   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:22.240524   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:22.240531   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:22.240538   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:22.240544   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:22.240552   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:22.240559   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:22.240565   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:22.240571   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:22.240577   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:22.240584   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:22.240595   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:22.240603   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:22.240610   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:22.240616   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:22.240622   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:22.240627   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:24.241967   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 23
	I1105 10:49:24.241983   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:24.242035   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:24.243025   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:24.243081   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:24.243092   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:24.243102   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:24.243111   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:24.243119   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:24.243127   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:24.243134   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:24.243140   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:24.243164   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:24.243175   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:24.243183   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:24.243192   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:24.243200   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:24.243207   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:24.243213   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:24.243219   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:24.243228   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:24.243239   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:24.243247   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:24.243255   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:26.244728   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 24
	I1105 10:49:26.244749   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:26.244813   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:26.245791   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:26.245844   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:26.245856   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:26.245866   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:26.245874   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:26.245880   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:26.245898   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:26.245911   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:26.245922   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:26.245931   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:26.245940   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:26.245953   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:26.245961   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:26.245967   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:26.245977   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:26.245984   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:26.245991   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:26.245999   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:26.246008   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:26.246014   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:26.246020   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:28.248091   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 25
	I1105 10:49:28.248107   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:28.248134   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:28.249075   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:28.249164   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:28.249173   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:28.249182   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:28.249187   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:28.249217   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:28.249234   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:28.249260   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:28.249279   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:28.249288   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:28.249293   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:28.249303   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:28.249336   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:28.249350   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:28.249361   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:28.249381   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:28.249394   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:28.249402   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:28.249418   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:28.249425   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:28.249440   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:30.251431   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 26
	I1105 10:49:30.251446   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:30.251482   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:30.252415   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:30.252515   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:30.252526   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:30.252533   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:30.252539   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:30.252546   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:30.252552   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:30.252569   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:30.252578   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:30.252594   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:30.252612   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:30.252620   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:30.252628   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:30.252642   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:30.252654   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:30.252662   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:30.252670   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:30.252677   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:30.252685   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:30.252691   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:30.252698   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:32.253713   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 27
	I1105 10:49:32.253729   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:32.253769   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:32.254721   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:32.254801   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:32.254838   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:32.254846   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:32.254859   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:32.254865   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:32.254871   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:32.254879   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:32.254893   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:32.254904   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:32.254913   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:32.254927   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:32.254934   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:32.254942   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:32.254948   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:32.254958   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:32.254965   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:32.254971   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:32.254979   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:32.254993   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:32.255005   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:34.256410   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 28
	I1105 10:49:34.256424   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:34.256972   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:34.257496   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:34.257607   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:34.257616   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:34.257626   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:34.257636   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:34.257643   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:34.257650   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:34.257703   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:34.257729   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:34.257748   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:34.257763   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:34.257776   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:34.257785   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:34.257804   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:34.257820   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:34.257836   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:34.257848   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:34.257869   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:34.257888   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:34.257903   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:34.257912   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:36.259831   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Attempt 29
	I1105 10:49:36.259850   22955 main.go:141] libmachine: (docker-flags-536000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:49:36.259910   22955 main.go:141] libmachine: (docker-flags-536000) DBG | hyperkit pid from json: 23022
	I1105 10:49:36.260847   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Searching for fe:36:00:6d:f8:41 in /var/db/dhcpd_leases ...
	I1105 10:49:36.260931   22955 main.go:141] libmachine: (docker-flags-536000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:49:36.260942   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:49:36.260950   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:49:36.260955   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:49:36.260962   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:49:36.260967   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:49:36.260973   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:49:36.260982   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:49:36.260988   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:49:36.260994   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:49:36.261002   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:49:36.261008   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:49:36.261014   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:49:36.261021   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:49:36.261029   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:49:36.261042   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:49:36.261057   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:49:36.261066   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:49:36.261071   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:49:36.261078   22955 main.go:141] libmachine: (docker-flags-536000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:49:38.262372   22955 client.go:171] duration metric: took 1m1.328135522s to LocalClient.Create
	I1105 10:49:40.264530   22955 start.go:128] duration metric: took 1m3.385339716s to createHost
	I1105 10:49:40.264547   22955 start.go:83] releasing machines lock for "docker-flags-536000", held for 1m3.38546171s
	W1105 10:49:40.264621   22955 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-536000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:36:00:6d:f8:41
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-536000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:36:00:6d:f8:41
	I1105 10:49:40.286158   22955 out.go:201] 
	W1105 10:49:40.327748   22955 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:36:00:6d:f8:41
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fe:36:00:6d:f8:41
	W1105 10:49:40.327763   22955 out.go:270] * 
	* 
	W1105 10:49:40.328499   22955 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:49:40.389766   22955 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-536000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-536000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-536000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (204.667784ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-536000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-536000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-536000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-536000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (188.547252ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-536000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-536000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-536000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-11-05 10:49:40.906384 -0800 PST m=+4157.420990061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-536000 -n docker-flags-536000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-536000 -n docker-flags-536000: exit status 7 (102.994715ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1105 10:49:41.006785   23058 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:49:41.006808   23058 status.go:119] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-536000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-536000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-536000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-536000: (5.266524551s)
--- FAIL: TestDockerFlags (252.32s)

TestForceSystemdFlag (252.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-892000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E1105 10:44:34.141448   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-892000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.485018373s)

-- stdout --
	* [force-systemd-flag-892000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-892000" primary control-plane node in "force-systemd-flag-892000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-892000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1105 10:44:30.621945   22916 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:44:30.622245   22916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:44:30.622250   22916 out.go:358] Setting ErrFile to fd 2...
	I1105 10:44:30.622254   22916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:44:30.622446   22916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:44:30.624058   22916 out.go:352] Setting JSON to false
	I1105 10:44:30.653121   22916 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9839,"bootTime":1730822431,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:44:30.653303   22916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:44:30.674572   22916 out.go:177] * [force-systemd-flag-892000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:44:30.717634   22916 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:44:30.717691   22916 notify.go:220] Checking for updates...
	I1105 10:44:30.759669   22916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:44:30.780649   22916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:44:30.801468   22916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:44:30.822749   22916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:44:30.843691   22916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:44:30.865003   22916 config.go:182] Loaded profile config "force-systemd-env-817000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:44:30.865096   22916 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:44:30.896682   22916 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:44:30.938495   22916 start.go:297] selected driver: hyperkit
	I1105 10:44:30.938514   22916 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:44:30.938524   22916 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:44:30.943877   22916 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:44:30.944023   22916 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:44:30.955205   22916 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:44:30.961703   22916 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:44:30.961744   22916 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:44:30.961780   22916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:44:30.962017   22916 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 10:44:30.962044   22916 cni.go:84] Creating CNI manager for ""
	I1105 10:44:30.962082   22916 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 10:44:30.962090   22916 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 10:44:30.962156   22916 start.go:340] cluster config:
	{Name:force-systemd-flag-892000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:44:30.962249   22916 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:44:30.982936   22916 out.go:177] * Starting "force-systemd-flag-892000" primary control-plane node in "force-systemd-flag-892000" cluster
	I1105 10:44:31.024796   22916 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:44:31.024838   22916 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:44:31.024854   22916 cache.go:56] Caching tarball of preloaded images
	I1105 10:44:31.024977   22916 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:44:31.024986   22916 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:44:31.025070   22916 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/force-systemd-flag-892000/config.json ...
	I1105 10:44:31.025089   22916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/force-systemd-flag-892000/config.json: {Name:mk9e6dc803206963e10aef889fe89667275793ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:44:31.025448   22916 start.go:360] acquireMachinesLock for force-systemd-flag-892000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:45:28.099352   22916 start.go:364] duration metric: took 57.051909554s to acquireMachinesLock for "force-systemd-flag-892000"
	I1105 10:45:28.099411   22916 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:45:28.099458   22916 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:45:28.140528   22916 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:45:28.140681   22916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:45:28.140739   22916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:45:28.151854   22916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60805
	I1105 10:45:28.152182   22916 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:45:28.152593   22916 main.go:141] libmachine: Using API Version  1
	I1105 10:45:28.152601   22916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:45:28.152878   22916 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:45:28.152989   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .GetMachineName
	I1105 10:45:28.153094   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .DriverName
	I1105 10:45:28.153208   22916 start.go:159] libmachine.API.Create for "force-systemd-flag-892000" (driver="hyperkit")
	I1105 10:45:28.153234   22916 client.go:168] LocalClient.Create starting
	I1105 10:45:28.153272   22916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:45:28.153336   22916 main.go:141] libmachine: Decoding PEM data...
	I1105 10:45:28.153354   22916 main.go:141] libmachine: Parsing certificate...
	I1105 10:45:28.153414   22916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:45:28.153460   22916 main.go:141] libmachine: Decoding PEM data...
	I1105 10:45:28.153471   22916 main.go:141] libmachine: Parsing certificate...
	I1105 10:45:28.153485   22916 main.go:141] libmachine: Running pre-create checks...
	I1105 10:45:28.153494   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .PreCreateCheck
	I1105 10:45:28.153564   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:28.153774   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .GetConfigRaw
	I1105 10:45:28.161798   22916 main.go:141] libmachine: Creating machine...
	I1105 10:45:28.161807   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .Create
	I1105 10:45:28.161886   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:28.162040   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:45:28.161881   22939 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:45:28.162104   22916 main.go:141] libmachine: (force-systemd-flag-892000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:45:28.590164   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:45:28.590041   22939 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/id_rsa...
	I1105 10:45:28.664296   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:45:28.664229   22939 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk...
	I1105 10:45:28.664310   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Writing magic tar header
	I1105 10:45:28.664321   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Writing SSH key tar header
	I1105 10:45:28.664923   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:45:28.664880   22939 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000 ...
	I1105 10:45:29.050671   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:29.050684   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid
	I1105 10:45:29.050741   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Using UUID 8dd8a56f-e503-431c-afca-c7577d6be7b1
	I1105 10:45:29.074404   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Generated MAC 6e:29:1f:49:f4:a0
	I1105 10:45:29.074435   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000
	I1105 10:45:29.074499   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8dd8a56f-e503-431c-afca-c7577d6be7b1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000ac690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:45:29.074537   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8dd8a56f-e503-431c-afca-c7577d6be7b1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000ac690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:45:29.074579   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8dd8a56f-e503-431c-afca-c7577d6be7b1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000"}
	I1105 10:45:29.074608   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8dd8a56f-e503-431c-afca-c7577d6be7b1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000"
	I1105 10:45:29.074625   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:45:29.077718   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 DEBUG: hyperkit: Pid is 22953
	I1105 10:45:29.078217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 0
	I1105 10:45:29.078231   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:29.078312   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:29.079457   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:29.079601   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:29.079617   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:29.079647   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:29.079668   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:29.079688   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:29.079701   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:29.079708   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:29.079716   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:29.079729   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:29.079740   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:29.079756   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:29.079780   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:29.079790   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:29.079799   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:29.079808   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:29.079814   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:29.079823   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:29.079831   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:29.079841   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:29.079851   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
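The "Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases" loop above is the driver polling the macOS DHCP lease database until the VM's freshly generated MAC acquires an IP; the run fails when no lease ever appears. A rough Go sketch of that lookup, assuming a raw lease-file layout of brace-delimited entries with `ip_address=` and `hw_address=1,<mac>` lines (the real parser lives in minikube's hyperkit driver; `leaseIPForMAC` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// leaseIPForMAC scans /var/db/dhcpd_leases-style content and returns the
// IP address bound to the given hardware address, or "" if no lease
// matches. Sketch only; the assumed entry format is
// "{\n name=...\n ip_address=...\n hw_address=1,<mac>\n ... }".
func leaseIPForMAC(leases, mac string) string {
	for _, block := range strings.Split(leases, "}") {
		if !strings.Contains(block, "hw_address=1,"+mac) {
			continue
		}
		for _, line := range strings.Split(block, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ip_address=") {
				return strings.TrimPrefix(line, "ip_address=")
			}
		}
	}
	return ""
}

func main() {
	sample := "{\n\tname=minikube\n\tip_address=192.169.0.20\n\thw_address=1,2:4c:13:f0:45:c6\n\tlease=0x672a7479\n}"
	fmt.Println(leaseIPForMAC(sample, "2:4c:13:f0:45:c6"))
	// A MAC with no lease yet, as in the retry loop above, yields "".
	fmt.Println(leaseIPForMAC(sample, "6e:29:1f:49:f4:a0") == "")
}
```

The driver retries this scan ("Attempt 0", "Attempt 1", ...) until the MAC shows up or the creation timeout expires.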
	I1105 10:45:29.087871   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:45:29.096265   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:45:29.097338   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:45:29.097356   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:45:29.097371   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:45:29.097390   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:45:29.479779   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:45:29.479794   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:45:29.594426   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:45:29.594455   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:45:29.594493   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:45:29.594507   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:45:29.595305   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:45:29.595316   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:29 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:45:31.081785   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 1
	I1105 10:45:31.081801   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:31.081883   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:31.082882   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:31.082974   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:31.082983   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:31.082992   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:31.082997   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:31.083005   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:31.083010   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:31.083026   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:31.083040   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:31.083050   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:31.083057   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:31.083073   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:31.083081   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:31.083089   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:31.083099   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:31.083106   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:31.083116   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:31.083124   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:31.083131   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:31.083141   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:31.083150   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:33.083295   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 2
	I1105 10:45:33.083328   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:33.083476   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:33.084417   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:33.084502   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:33.084513   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:33.084523   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:33.084551   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:33.084564   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:33.084582   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:33.084590   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:33.084596   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:33.084602   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:33.084609   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:33.084618   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:33.084631   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:33.084638   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:33.084644   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:33.084651   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:33.084657   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:33.084664   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:33.084676   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:33.084689   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:33.084698   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:34.953220   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:45:34.953340   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:45:34.953349   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:45:34.972930   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:45:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:45:35.085730   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 3
	I1105 10:45:35.085757   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:35.085949   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:35.087732   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:35.087907   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:35.087921   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:35.087936   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:35.087944   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:35.087953   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:35.087962   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:35.087972   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:35.087986   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:35.087995   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:35.088016   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:35.088040   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:35.088051   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:35.088061   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:35.088070   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:35.088081   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:35.088108   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:35.088123   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:35.088133   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:35.088142   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:35.088158   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:37.090107   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 4
	I1105 10:45:37.090122   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:37.090219   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:37.091179   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:37.091285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:37.091295   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:37.091303   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:37.091309   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:37.091326   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:37.091338   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:37.091348   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:37.091357   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:37.091379   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:37.091391   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:37.091419   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:37.091431   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:37.091439   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:37.091450   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:37.091458   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:37.091464   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:37.091472   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:37.091478   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:37.091484   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:37.091498   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:39.093587   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 5
	I1105 10:45:39.093602   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:39.093611   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:39.094556   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:39.094658   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:39.094669   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:39.094676   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:39.094681   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:39.094697   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:39.094703   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:39.094711   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:39.094719   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:39.094734   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:39.094745   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:39.094752   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:39.094771   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:39.094783   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:39.094791   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:39.094798   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:39.094805   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:39.094810   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:39.094818   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:39.094829   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:39.094835   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:41.096935   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 6
	I1105 10:45:41.096950   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:41.096980   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:41.097928   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:41.097992   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:41.098002   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:41.098018   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:41.098027   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:41.098034   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:41.098039   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:41.098046   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:41.098051   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:41.098059   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:41.098066   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:41.098073   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:41.098080   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:41.098087   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:41.098093   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:41.098108   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:41.098121   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:41.098129   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:41.098143   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:41.098150   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:41.098157   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:43.098354   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 7
	I1105 10:45:43.098371   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:43.098433   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:43.099398   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:43.099460   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:43.099470   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:43.099480   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:43.099488   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:43.099501   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:43.099510   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:43.099524   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:43.099535   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:43.099545   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:43.099555   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:43.099562   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:43.099569   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:43.099583   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:43.099600   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:43.099653   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:43.099673   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:43.099683   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:43.099689   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:43.099707   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:43.099720   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:45.100433   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 8
	I1105 10:45:45.100446   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:45.100502   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:45.101469   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:45.101537   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:45.101548   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:45.101558   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:45.101567   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:45.101592   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:45.101611   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:45.101624   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:45.101632   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:45.101639   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:45.101646   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:45.101652   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:45.101658   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:45.101666   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:45.101674   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:45.101682   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:45.101694   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:45.101702   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:45.101709   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:45.101716   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:45.101723   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:47.102471   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 9
	I1105 10:45:47.102484   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:47.102538   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:47.103534   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:47.103591   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:47.103601   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:47.103607   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:47.103614   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:47.103621   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:47.103654   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:47.103664   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:47.103673   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:47.103679   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:47.103697   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:47.103712   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:47.103720   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:47.103728   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:47.103734   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:47.103743   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:47.103759   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:47.103766   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:47.103788   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:47.103801   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:47.103810   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:49.105791   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 10
	I1105 10:45:49.105806   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:49.105835   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:49.106801   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:49.106874   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:49.106886   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:49.106897   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:49.106907   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:49.106916   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:49.106924   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:49.106932   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:49.106939   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:49.106960   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:49.106979   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:49.106992   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:49.106999   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:49.107008   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:49.107015   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:49.107031   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:49.107038   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:49.107046   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:49.107054   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:49.107062   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:49.107070   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:51.109099   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 11
	I1105 10:45:51.109118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:51.109217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:51.110119   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:51.110208   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:51.110216   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:51.110223   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:51.110229   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:51.110235   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:51.110243   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:51.110253   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:51.110268   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:51.110278   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:51.110286   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:51.110292   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:51.110298   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:51.110310   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:51.110321   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:51.110331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:51.110339   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:51.110358   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:51.110373   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:51.110380   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:51.110389   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:53.112486   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 12
	I1105 10:45:53.112500   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:53.112527   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:53.113486   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:53.113574   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:53.113582   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:53.113591   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:53.113597   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:53.113603   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:53.113609   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:53.113615   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:53.113637   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:53.113657   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:53.113670   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:53.113679   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:53.113686   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:53.113705   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:53.113713   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:53.113720   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:53.113728   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:53.113734   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:53.113740   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:53.113755   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:53.113765   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:55.114370   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 13
	I1105 10:45:55.114387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:55.114459   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:55.115387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:55.115465   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:55.115475   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:55.115484   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:55.115492   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:55.115499   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:55.115505   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:55.115512   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:55.115517   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:55.115535   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:55.115547   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:55.115554   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:55.115560   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:55.115568   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:55.115576   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:55.115582   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:55.115600   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:55.115608   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:55.115623   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:55.115634   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:55.115643   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:57.117733   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 14
	I1105 10:45:57.117746   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:57.117780   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:57.118725   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:57.118809   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:57.118821   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:57.118828   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:57.118833   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:57.118854   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:57.118863   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:57.118869   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:57.118875   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:57.118895   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:57.118910   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:57.118919   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:57.118928   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:57.118940   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:57.118954   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:57.118963   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:57.118971   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:57.118980   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:57.118987   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:57.118997   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:57.119009   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:59.121018   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 15
	I1105 10:45:59.121034   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:59.121067   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:45:59.121995   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:45:59.122106   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:59.122118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:59.122124   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:59.122131   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:59.122143   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:59.122150   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:59.122176   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:59.122184   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:59.122192   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:59.122202   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:59.122208   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:59.122217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:59.122238   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:59.122251   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:59.122259   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:59.122267   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:59.122282   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:59.122292   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:59.122304   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:59.122310   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:01.122368   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 16
	I1105 10:46:01.122382   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:01.122449   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:01.123406   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:01.123464   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:01.123473   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:01.123482   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:01.123489   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:01.123525   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:01.123538   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:01.123553   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:01.123561   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:01.123573   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:01.123585   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:01.123594   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:01.123600   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:01.123606   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:01.123612   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:01.123621   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:01.123636   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:01.123648   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:01.123656   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:01.123665   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:01.123673   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:03.123704   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 17
	I1105 10:46:03.123724   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:03.123865   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:03.124791   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:03.124921   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:03.124944   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:03.124951   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:03.124958   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:03.124966   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:03.124972   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:03.124985   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:03.124995   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:03.125001   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:03.125011   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:03.125021   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:03.125032   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:03.125038   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:03.125046   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:03.125053   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:03.125061   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:03.125086   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:03.125100   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:03.125108   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:03.125117   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:05.125377   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 18
	I1105 10:46:05.125393   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:05.125402   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:05.126401   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:05.126514   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:05.126530   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:05.126537   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:05.126544   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:05.126550   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:05.126560   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:05.126575   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:05.126586   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:05.126606   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:05.126614   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:05.126621   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:05.126630   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:05.126637   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:05.126644   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:05.126666   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:05.126678   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:05.126685   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:05.126693   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:05.126699   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:05.126704   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:07.128725   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 19
	I1105 10:46:07.128740   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:07.128807   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:07.129822   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:07.129900   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:07.129910   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:07.129929   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:07.129938   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:07.129945   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:07.129953   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:07.129959   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:07.129965   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:07.129971   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:07.129977   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:07.129992   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:07.130003   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:07.130013   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:07.130020   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:07.130035   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:07.130048   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:07.130056   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:07.130063   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:07.130070   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:07.130086   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:09.131167   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 20
	I1105 10:46:09.131183   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:09.131243   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:09.132175   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:09.132262   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:09.132270   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:09.132277   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:09.132282   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:09.132288   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:09.132294   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:09.132312   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:09.132324   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:09.132333   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:09.132341   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:09.132348   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:09.132355   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:09.132368   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:09.132378   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:09.132395   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:09.132406   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:09.132414   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:09.132421   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:09.132428   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:09.132433   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:11.132542   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 21
	I1105 10:46:11.132558   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:11.132629   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:11.133577   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:11.133661   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:11.133671   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:11.133677   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:11.133683   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:11.133696   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:11.133704   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:11.133712   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:11.133723   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:11.133742   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:11.133755   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:11.133763   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:11.133771   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:11.133777   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:11.133795   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:11.133816   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:11.133830   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:11.133837   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:11.133843   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:11.133855   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:11.133867   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:13.135904   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 22
	I1105 10:46:13.135917   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:13.135951   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:13.136904   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:13.136967   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:13.136975   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:13.136988   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:13.136997   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:13.137008   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:13.137016   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:13.137026   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:13.137033   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:13.137040   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:13.137059   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:13.137071   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:13.137079   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:13.137085   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:13.137092   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:13.137100   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:13.137115   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:13.137127   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:13.137134   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:13.137143   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:13.137156   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:15.139285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 23
	I1105 10:46:15.139300   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:15.139338   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:15.140291   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:15.140352   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:15.140363   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:15.140372   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:15.140380   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:15.140389   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:15.140407   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:15.140418   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:15.140425   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:15.140431   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:15.140446   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:15.140462   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:15.140467   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:15.140474   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:15.140481   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:15.140487   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:15.140492   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:15.140498   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:15.140505   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:15.140511   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:15.140516   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:17.141226   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 24
	I1105 10:46:17.141241   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:17.141341   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:17.142244   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:17.142365   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:17.142374   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:17.142381   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:17.142386   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:17.142392   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:17.142398   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:17.142422   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:17.142431   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:17.142438   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:17.142445   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:17.142452   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:17.142459   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:17.142466   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:17.142471   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:17.142481   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:17.142496   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:17.142516   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:17.142524   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:17.142532   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:17.142542   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:19.144463   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 25
	I1105 10:46:19.144478   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:19.144532   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:19.145474   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:19.145558   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:19.145566   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:19.145576   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:19.145585   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:19.145593   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:19.145599   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:19.145626   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:19.145640   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:19.145650   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:19.145658   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:19.145669   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:19.145677   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:19.145691   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:19.145701   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:19.145715   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:19.145726   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:19.145733   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:19.145738   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:19.145747   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:19.145757   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:21.146623   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 26
	I1105 10:46:21.146638   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:21.146691   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:21.147663   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:21.147717   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:21.147727   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:21.147743   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:21.147750   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:21.147757   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:21.147763   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:21.147768   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:21.147775   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:21.147780   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:21.147795   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:21.147809   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:21.147817   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:21.147824   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:21.147831   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:21.147838   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:21.147856   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:21.147870   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:21.147879   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:21.147887   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:21.147895   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:23.149234   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 27
	I1105 10:46:23.149246   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:23.149256   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:23.150218   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:23.150313   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:23.150321   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:23.150331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:23.150340   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:23.150347   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:23.150355   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:23.150361   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:23.150370   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:23.150378   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:23.150401   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:23.150416   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:23.150426   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:23.150434   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:23.150444   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:23.150450   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:23.150457   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:23.150463   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:23.150491   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:23.150501   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:23.150508   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:25.152600   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 28
	I1105 10:46:25.152616   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:25.152680   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:25.153627   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:25.153702   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:25.153714   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:25.153750   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:25.153760   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:25.153782   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:25.153792   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:25.153808   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:25.153815   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:25.153822   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:25.153831   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:25.153839   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:25.153846   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:25.153853   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:25.153860   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:25.153868   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:25.153874   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:25.153880   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:25.153892   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:25.153903   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:25.153912   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:27.155829   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 29
	I1105 10:46:27.155844   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:27.155898   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:27.156835   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 6e:29:1f:49:f4:a0 in /var/db/dhcpd_leases ...
	I1105 10:46:27.156950   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:46:27.156960   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:46:27.156967   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:46:27.156998   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:46:27.157014   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:46:27.157023   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:46:27.157029   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:46:27.157042   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:46:27.157050   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:46:27.157058   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:46:27.157067   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:46:27.157074   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:46:27.157081   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:46:27.157088   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:46:27.157095   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:46:27.157103   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:46:27.157120   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:46:27.157130   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:46:27.157144   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:46:27.157163   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:46:29.159214   22916 client.go:171] duration metric: took 1m1.004029026s to LocalClient.Create
	I1105 10:46:31.161392   22916 start.go:128] duration metric: took 1m3.059925562s to createHost
	I1105 10:46:31.161412   22916 start.go:83] releasing machines lock for "force-systemd-flag-892000", held for 1m3.060057142s
	W1105 10:46:31.161426   22916 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:29:1f:49:f4:a0
	I1105 10:46:31.161791   22916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:46:31.161813   22916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:46:31.173024   22916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60821
	I1105 10:46:31.173337   22916 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:46:31.173674   22916 main.go:141] libmachine: Using API Version  1
	I1105 10:46:31.173683   22916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:46:31.173909   22916 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:46:31.174329   22916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:46:31.174353   22916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:46:31.185301   22916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60823
	I1105 10:46:31.185615   22916 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:46:31.185945   22916 main.go:141] libmachine: Using API Version  1
	I1105 10:46:31.185961   22916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:46:31.186171   22916 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:46:31.186323   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .GetState
	I1105 10:46:31.186448   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.186511   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:31.187686   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .DriverName
	I1105 10:46:31.208964   22916 out.go:177] * Deleting "force-systemd-flag-892000" in hyperkit ...
	I1105 10:46:31.250824   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .Remove
	I1105 10:46:31.250977   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.250991   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.251050   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:31.252223   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:31.252259   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | waiting for graceful shutdown
	I1105 10:46:32.252714   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:32.252799   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:32.253942   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | waiting for graceful shutdown
	I1105 10:46:33.254118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:33.254201   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:33.255996   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | waiting for graceful shutdown
	I1105 10:46:34.257344   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:34.257435   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:34.258251   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | waiting for graceful shutdown
	I1105 10:46:35.260406   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:35.260491   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:35.261640   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | waiting for graceful shutdown
	I1105 10:46:36.262376   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:46:36.262455   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22953
	I1105 10:46:36.263165   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | sending sigkill
	I1105 10:46:36.263175   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1105 10:46:36.274116   22916 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:29:1f:49:f4:a0
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6e:29:1f:49:f4:a0
	I1105 10:46:36.274135   22916 start.go:729] Will try again in 5 seconds ...
	I1105 10:46:36.307224   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:46:36 WARN : hyperkit: failed to read stdout: EOF
	I1105 10:46:36.307242   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:46:36 WARN : hyperkit: failed to read stderr: EOF
	I1105 10:46:41.275860   22916 start.go:360] acquireMachinesLock for force-systemd-flag-892000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:47:33.998787   22916 start.go:364] duration metric: took 52.721674087s to acquireMachinesLock for "force-systemd-flag-892000"
	I1105 10:47:33.998812   22916 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-892000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-892000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:47:33.998901   22916 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:47:34.020353   22916 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:47:34.020452   22916 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:47:34.020469   22916 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:47:34.032225   22916 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60831
	I1105 10:47:34.032555   22916 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:47:34.032892   22916 main.go:141] libmachine: Using API Version  1
	I1105 10:47:34.032904   22916 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:47:34.033123   22916 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:47:34.033210   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .GetMachineName
	I1105 10:47:34.033312   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .DriverName
	I1105 10:47:34.033423   22916 start.go:159] libmachine.API.Create for "force-systemd-flag-892000" (driver="hyperkit")
	I1105 10:47:34.033435   22916 client.go:168] LocalClient.Create starting
	I1105 10:47:34.033461   22916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:47:34.033523   22916 main.go:141] libmachine: Decoding PEM data...
	I1105 10:47:34.033534   22916 main.go:141] libmachine: Parsing certificate...
	I1105 10:47:34.033577   22916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:47:34.033630   22916 main.go:141] libmachine: Decoding PEM data...
	I1105 10:47:34.033638   22916 main.go:141] libmachine: Parsing certificate...
	I1105 10:47:34.033664   22916 main.go:141] libmachine: Running pre-create checks...
	I1105 10:47:34.033674   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .PreCreateCheck
	I1105 10:47:34.033746   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.033781   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .GetConfigRaw
	I1105 10:47:34.099265   22916 main.go:141] libmachine: Creating machine...
	I1105 10:47:34.099274   22916 main.go:141] libmachine: (force-systemd-flag-892000) Calling .Create
	I1105 10:47:34.099370   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.099533   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:47:34.099359   22990 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:47:34.099589   22916 main.go:141] libmachine: (force-systemd-flag-892000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:47:34.334692   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:47:34.334596   22990 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/id_rsa...
	I1105 10:47:34.386047   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:47:34.385979   22990 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk...
	I1105 10:47:34.386056   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Writing magic tar header
	I1105 10:47:34.386065   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Writing SSH key tar header
	I1105 10:47:34.386445   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | I1105 10:47:34.386403   22990 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000 ...
	I1105 10:47:34.767416   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.767441   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid
	I1105 10:47:34.767460   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Using UUID 5551cfde-13d2-4c10-b502-a9496d0ab1d7
	I1105 10:47:34.793090   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Generated MAC 76:ea:ca:74:5a:9f
	I1105 10:47:34.793118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000
	I1105 10:47:34.793214   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5551cfde-13d2-4c10-b502-a9496d0ab1d7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:47:34.793260   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5551cfde-13d2-4c10-b502-a9496d0ab1d7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:47:34.793389   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5551cfde-13d2-4c10-b502-a9496d0ab1d7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000"}
	I1105 10:47:34.793465   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5551cfde-13d2-4c10-b502-a9496d0ab1d7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/force-systemd-flag-892000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-892000"
	I1105 10:47:34.793489   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:47:34.796563   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 DEBUG: hyperkit: Pid is 22992
	I1105 10:47:34.797752   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 0
	I1105 10:47:34.797769   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:34.797827   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:34.798880   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:34.799024   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:34.799047   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:34.799071   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:34.799083   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:34.799097   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:34.799118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:34.799147   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:34.799163   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:34.799184   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:34.799199   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:34.799216   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:34.799233   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:34.799247   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:34.799260   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:34.799272   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:34.799287   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:34.799304   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:34.799318   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:34.799331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:34.799353   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:34.808469   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:47:34.816896   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-flag-892000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:47:34.817928   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:47:34.817953   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:47:34.817965   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:47:34.817979   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:47:35.202096   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:47:35.202118   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:47:35.316975   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:47:35.316991   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:47:35.317002   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:47:35.317014   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:47:35.317874   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:47:35.317889   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:47:36.801238   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 1
	I1105 10:47:36.801265   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:36.801342   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:36.802403   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:36.802521   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:36.802529   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:36.802536   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:36.802543   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:36.802550   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:36.802555   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:36.802563   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:36.802569   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:36.802586   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:36.802598   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:36.802606   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:36.802614   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:36.802621   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:36.802629   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:36.802641   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:36.802656   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:36.802663   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:36.802669   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:36.802677   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:36.802686   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:38.803092   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 2
	I1105 10:47:38.803105   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:38.803167   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:38.804141   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:38.804212   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:38.804219   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:38.804226   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:38.804234   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:38.804253   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:38.804265   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:38.804285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:38.804291   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:38.804300   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:38.804309   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:38.804315   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:38.804323   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:38.804330   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:38.804337   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:38.804344   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:38.804351   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:38.804357   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:38.804370   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:38.804377   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:38.804382   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:40.660116   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:47:40.660197   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:47:40.660208   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:47:40.679845   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | 2024/11/05 10:47:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:47:40.805646   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 3
	I1105 10:47:40.805676   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:40.805878   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:40.807623   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:40.807806   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:40.807820   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:40.807829   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:40.807841   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:40.807854   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:40.807863   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:40.807871   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:40.807882   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:40.807890   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:40.807904   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:40.807934   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:40.807952   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:40.807983   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:40.807995   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:40.808021   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:40.808048   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:40.808059   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:40.808068   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:40.808078   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:40.808090   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:42.808001   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 4
	I1105 10:47:42.808017   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:42.808095   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:42.809078   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:42.809188   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:42.809200   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:42.809220   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:42.809228   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:42.809234   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:42.809240   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:42.809247   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:42.809252   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:42.809276   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:42.809285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:42.809294   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:42.809301   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:42.809308   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:42.809314   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:42.809320   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:42.809326   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:42.809333   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:42.809340   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:42.809346   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:42.809352   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:44.811453   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 5
	I1105 10:47:44.811469   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:44.811526   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:44.812691   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:44.812782   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:44.812795   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:44.812816   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:44.812826   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:44.812834   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:44.812839   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:44.812857   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:44.812872   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:44.812895   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:44.812905   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:44.812923   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:44.812935   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:44.812957   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:44.812982   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:44.812993   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:44.813002   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:44.813010   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:44.813034   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:44.813042   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:44.813051   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:46.813564   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 6
	I1105 10:47:46.813580   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:46.813642   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:46.814575   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:46.814725   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:46.814737   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:46.814747   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:46.814756   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:46.814766   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:46.814772   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:46.814779   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:46.814786   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:46.814797   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:46.814806   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:46.814813   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:46.814819   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:46.814832   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:46.814843   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:46.814851   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:46.814859   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:46.814877   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:46.814888   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:46.814903   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:46.814915   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:48.815589   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 7
	I1105 10:47:48.815611   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:48.815692   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:48.816692   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:48.816778   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:48.816789   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:48.816796   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:48.816804   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:48.816811   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:48.816820   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:48.816826   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:48.816833   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:48.816847   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:48.816859   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:48.816867   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:48.816876   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:48.816883   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:48.816890   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:48.816902   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:48.816910   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:48.816917   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:48.816923   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:48.816930   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:48.816938   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:50.817075   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 8
	I1105 10:47:50.817087   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:50.817149   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:50.818110   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:50.818191   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:50.818200   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:50.818212   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:50.818217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:50.818232   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:50.818240   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:50.818247   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:50.818255   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:50.818262   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:50.818269   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:50.818287   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:50.818299   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:50.818312   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:50.818320   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:50.818327   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:50.818332   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:50.818350   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:50.818362   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:50.818370   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:50.818378   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:52.819891   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 9
	I1105 10:47:52.819917   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:52.819955   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:52.820930   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:52.820985   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:52.820998   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:52.821007   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:52.821014   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:52.821041   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:52.821052   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:52.821058   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:52.821066   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:52.821078   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:52.821086   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:52.821092   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:52.821100   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:52.821112   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:52.821119   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:52.821125   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:52.821133   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:52.821140   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:52.821147   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:52.821163   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:52.821176   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:54.823198   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 10
	I1105 10:47:54.823210   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:54.823283   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:54.824254   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:54.824325   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:54.824335   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:54.824348   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:54.824363   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:54.824373   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:54.824380   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:54.824387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:54.824402   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:54.824412   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:54.824435   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:54.824447   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:54.824456   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:54.824462   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:54.824471   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:54.824481   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:54.824490   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:54.824513   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:54.824523   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:54.824537   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:54.824551   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:56.826554   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 11
	I1105 10:47:56.826568   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:56.826645   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:56.827922   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:56.828007   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:56.828021   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:56.828047   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:56.828056   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:56.828071   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:56.828091   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:56.828101   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:56.828107   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:56.828123   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:56.828135   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:56.828144   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:56.828158   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:56.828166   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:56.828174   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:56.828180   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:56.828188   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:56.828198   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:56.828206   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:56.828213   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:56.828219   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:47:58.829056   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 12
	I1105 10:47:58.829071   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:47:58.829132   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:47:58.830139   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:47:58.830239   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:47:58.830258   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:47:58.830276   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:47:58.830283   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:47:58.830293   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:47:58.830305   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:47:58.830315   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:47:58.830331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:47:58.830339   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:47:58.830345   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:47:58.830351   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:47:58.830360   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:47:58.830367   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:47:58.830375   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:47:58.830381   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:47:58.830387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:47:58.830393   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:47:58.830398   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:47:58.830411   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:47:58.830424   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:00.831937   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 13
	I1105 10:48:00.831957   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:00.832022   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:00.833005   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:00.833092   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:00.833100   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:00.833106   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:00.833112   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:00.833157   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:00.833175   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:00.833201   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:00.833218   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:00.833228   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:00.833235   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:00.833241   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:00.833248   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:00.833255   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:00.833263   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:00.833270   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:00.833276   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:00.833282   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:00.833290   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:00.833297   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:00.833304   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:02.833541   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 14
	I1105 10:48:02.833555   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:02.833613   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:02.834596   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:02.834686   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:02.834695   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:02.834702   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:02.834710   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:02.834722   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:02.834729   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:02.834735   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:02.834744   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:02.834757   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:02.834765   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:02.834771   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:02.834779   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:02.834791   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:02.834798   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:02.834805   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:02.834813   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:02.834829   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:02.834840   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:02.834848   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:02.834854   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:04.835186   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 15
	I1105 10:48:04.835199   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:04.835260   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:04.836199   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:04.836294   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:04.836310   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:04.836320   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:04.836325   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:04.836331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:04.836336   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:04.836343   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:04.836348   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:04.836360   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:04.836367   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:04.836373   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:04.836391   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:04.836403   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:04.836410   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:04.836421   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:04.836430   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:04.836438   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:04.836444   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:04.836452   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:04.836464   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:06.836663   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 16
	I1105 10:48:06.836678   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:06.836745   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:06.837706   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:06.837797   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:06.837806   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:06.837818   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:06.837826   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:06.837834   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:06.837839   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:06.837845   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:06.837852   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:06.837863   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:06.837871   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:06.837888   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:06.837900   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:06.837908   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:06.837914   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:06.837927   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:06.837939   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:06.837948   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:06.837956   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:06.837962   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:06.837968   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:08.838464   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 17
	I1105 10:48:08.838483   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:08.838539   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:08.839519   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:08.839689   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:08.839696   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:08.839702   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:08.839707   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:08.839713   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:08.839735   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:08.839743   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:08.839751   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:08.839757   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:08.839770   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:08.839778   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:08.839785   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:08.839790   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:08.839799   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:08.839820   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:08.839832   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:08.839840   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:08.839848   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:08.839863   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:08.839871   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:10.841979   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 18
	I1105 10:48:10.841992   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:10.842054   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:10.842996   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:10.843096   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:10.843126   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:10.843148   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:10.843162   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:10.843178   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:10.843185   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:10.843198   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:10.843205   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:10.843213   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:10.843220   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:10.843228   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:10.843235   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:10.843241   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:10.843250   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:10.843258   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:10.843266   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:10.843279   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:10.843295   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:10.843312   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:10.843327   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:12.845380   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 19
	I1105 10:48:12.845395   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:12.845444   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:12.846661   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:12.846731   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:12.846740   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:12.846748   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:12.846754   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:12.846761   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:12.846776   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:12.846784   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:12.846791   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:12.846798   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:12.846810   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:12.846819   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:12.846828   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:12.846836   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:12.846852   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:12.846865   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:12.846873   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:12.846881   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:12.846888   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:12.846899   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:12.846916   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:14.848902   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 20
	I1105 10:48:14.848917   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:14.848989   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:14.849987   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:14.850130   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:14.850143   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:14.850150   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:14.850167   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:14.850180   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:14.850190   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:14.850201   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:14.850214   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:14.850228   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:14.850237   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:14.850244   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:14.850252   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:14.850264   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:14.850285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:14.850292   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:14.850310   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:14.850324   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:14.850331   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:14.850339   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:14.850347   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:16.852355   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 21
	I1105 10:48:16.852371   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:16.852404   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:16.853347   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:16.853444   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:16.853454   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:16.853461   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:16.853466   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:16.853473   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:16.853478   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:16.853484   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:16.853490   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:16.853496   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:16.853502   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:16.853508   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:16.853518   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:16.853525   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:16.853533   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:16.853541   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:16.853553   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:16.853559   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:16.853574   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:16.853585   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:16.853594   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:18.855536   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 22
	I1105 10:48:18.855551   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:18.855618   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:18.856581   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:18.856716   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:18.856725   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:18.856732   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:18.856737   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:18.856745   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:18.856751   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:18.856758   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:18.856767   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:18.856774   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:18.856780   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:18.856798   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:18.856810   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:18.856816   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:18.856831   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:18.856847   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:18.856854   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:18.856862   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:18.856868   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:18.856877   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:18.856888   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:20.857167   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 23
	I1105 10:48:20.857180   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:20.857252   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:20.858212   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:20.858277   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:20.858285   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:20.858297   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:20.858304   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:20.858330   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:20.858341   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:20.858355   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:20.858362   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:20.858369   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:20.858379   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:20.858387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:20.858394   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:20.858399   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:20.858405   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:20.858411   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:20.858421   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:20.858436   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:20.858448   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:20.858466   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:20.858478   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:22.858496   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 24
	I1105 10:48:22.858510   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:22.858567   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:22.859543   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:22.859615   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:22.859625   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:22.859636   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:22.859645   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:22.859652   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:22.859659   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:22.859667   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:22.859682   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:22.859691   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:22.859698   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:22.859705   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:22.859714   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:22.859722   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:22.859728   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:22.859735   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:22.859746   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:22.859754   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:22.859760   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:22.859767   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:22.859789   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:24.861869   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 25
	I1105 10:48:24.861884   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:24.861947   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:24.863101   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:24.863217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:24.863236   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:24.863249   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:24.863259   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:24.863266   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:24.863282   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:24.863290   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:24.863295   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:24.863302   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:24.863308   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:24.863314   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:24.863323   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:24.863330   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:24.863348   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:24.863365   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:24.863376   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:24.863385   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:24.863400   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:24.863412   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:24.863422   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:26.865484   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 26
	I1105 10:48:26.865496   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:26.865555   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:26.866595   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:26.866681   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:26.866689   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:26.866699   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:26.866704   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:26.866718   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:26.866731   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:26.866744   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:26.866752   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:26.866765   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:26.866776   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:26.866784   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:26.866792   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:26.866799   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:26.866804   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:26.866822   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:26.866834   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:26.866850   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:26.866862   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:26.866869   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:26.866876   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:28.866937   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 27
	I1105 10:48:28.866955   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:28.867032   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:28.868153   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:28.868239   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:28.868272   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:28.868281   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:28.868289   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:28.868306   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:28.868318   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:28.868326   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:28.868338   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:28.868345   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:28.868352   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:28.868359   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:28.868366   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:28.868379   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:28.868389   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:28.868402   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:28.868411   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:28.868417   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:28.868424   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:28.868431   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:28.868436   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:30.868806   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 28
	I1105 10:48:30.868818   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:30.869387   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:30.869931   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:30.870061   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:30.870076   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:30.870094   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:30.870113   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:30.870127   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:30.870140   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:30.870167   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:30.870181   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:30.870262   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:30.870344   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:30.870416   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:30.870441   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:30.870453   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:30.870461   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:30.870522   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:30.870745   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:30.870764   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:30.870773   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:30.870780   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:30.870788   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:32.871923   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Attempt 29
	I1105 10:48:32.871940   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:48:32.871995   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | hyperkit pid from json: 22992
	I1105 10:48:32.872952   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Searching for 76:ea:ca:74:5a:9f in /var/db/dhcpd_leases ...
	I1105 10:48:32.873045   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:48:32.873058   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:48:32.873071   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:48:32.873077   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:48:32.873084   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:48:32.873091   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:48:32.873100   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:48:32.873106   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:48:32.873114   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:48:32.873120   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:48:32.873126   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:48:32.873132   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:48:32.873139   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:48:32.873145   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:48:32.873151   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:48:32.873183   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:48:32.873200   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:48:32.873208   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:48:32.873217   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:48:32.873237   22916 main.go:141] libmachine: (force-systemd-flag-892000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:48:34.875304   22916 client.go:171] duration metric: took 1m0.840443534s to LocalClient.Create
	I1105 10:48:36.877500   22916 start.go:128] duration metric: took 1m2.877098151s to createHost
	I1105 10:48:36.877549   22916 start.go:83] releasing machines lock for "force-systemd-flag-892000", held for 1m2.877267831s
	W1105 10:48:36.877623   22916 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-892000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:ea:ca:74:5a:9f
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-892000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:ea:ca:74:5a:9f
	I1105 10:48:36.940811   22916 out.go:201] 
	W1105 10:48:36.961916   22916 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:ea:ca:74:5a:9f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:ea:ca:74:5a:9f
	W1105 10:48:36.961927   22916 out.go:270] * 
	* 
	W1105 10:48:36.962576   22916 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:48:37.024724   22916 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-892000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-892000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-892000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (184.813441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-892000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-892000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-11-05 10:48:37.331363 -0800 PST m=+4093.847448015
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-892000 -n force-systemd-flag-892000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-892000 -n force-systemd-flag-892000: exit status 7 (95.720896ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 10:48:37.424742   23013 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:48:37.424767   23013 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-892000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-892000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-892000: (5.270958557s)
--- FAIL: TestForceSystemdFlag (252.11s)

                                                
                                    
TestForceSystemdEnv (233.79s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-817000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-817000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.133363023s)

                                                
                                                
-- stdout --
	* [force-systemd-env-817000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-817000" primary control-plane node in "force-systemd-env-817000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-817000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 10:41:40.185887   22854 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:41:40.186583   22854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:41:40.186592   22854 out.go:358] Setting ErrFile to fd 2...
	I1105 10:41:40.186599   22854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:41:40.186988   22854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:41:40.188949   22854 out.go:352] Setting JSON to false
	I1105 10:41:40.217027   22854 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9669,"bootTime":1730822431,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:41:40.217189   22854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:41:40.248638   22854 out.go:177] * [force-systemd-env-817000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:41:40.296786   22854 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:41:40.296841   22854 notify.go:220] Checking for updates...
	I1105 10:41:40.340904   22854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:41:40.361725   22854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:41:40.382944   22854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:41:40.403925   22854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:41:40.424768   22854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1105 10:41:40.446363   22854 config.go:182] Loaded profile config "offline-docker-052000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:41:40.446456   22854 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:41:40.477933   22854 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:41:40.535972   22854 start.go:297] selected driver: hyperkit
	I1105 10:41:40.535985   22854 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:41:40.535993   22854 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:41:40.541209   22854 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:41:40.541349   22854 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:41:40.551995   22854 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:41:40.558213   22854 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:41:40.558248   22854 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:41:40.558277   22854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:41:40.558497   22854 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 10:41:40.558522   22854 cni.go:84] Creating CNI manager for ""
	I1105 10:41:40.558568   22854 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 10:41:40.558581   22854 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 10:41:40.558642   22854 start.go:340] cluster config:
	{Name:force-systemd-env-817000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:41:40.558725   22854 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:41:40.600879   22854 out.go:177] * Starting "force-systemd-env-817000" primary control-plane node in "force-systemd-env-817000" cluster
	I1105 10:41:40.621920   22854 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:41:40.621946   22854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:41:40.621959   22854 cache.go:56] Caching tarball of preloaded images
	I1105 10:41:40.622061   22854 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:41:40.622069   22854 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:41:40.622133   22854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/force-systemd-env-817000/config.json ...
	I1105 10:41:40.622150   22854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/force-systemd-env-817000/config.json: {Name:mkefb142582366656d354086fa6b3ba25d306a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:41:40.622474   22854 start.go:360] acquireMachinesLock for force-systemd-env-817000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:42:19.209074   22854 start.go:364] duration metric: took 38.585886649s to acquireMachinesLock for "force-systemd-env-817000"
	I1105 10:42:19.209113   22854 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:42:19.209167   22854 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:42:19.230768   22854 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:42:19.230938   22854 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:42:19.230980   22854 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:42:19.241789   22854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60785
	I1105 10:42:19.242116   22854 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:42:19.242523   22854 main.go:141] libmachine: Using API Version  1
	I1105 10:42:19.242532   22854 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:42:19.242764   22854 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:42:19.242885   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .GetMachineName
	I1105 10:42:19.242995   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .DriverName
	I1105 10:42:19.243125   22854 start.go:159] libmachine.API.Create for "force-systemd-env-817000" (driver="hyperkit")
	I1105 10:42:19.243154   22854 client.go:168] LocalClient.Create starting
	I1105 10:42:19.243198   22854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:42:19.243265   22854 main.go:141] libmachine: Decoding PEM data...
	I1105 10:42:19.243280   22854 main.go:141] libmachine: Parsing certificate...
	I1105 10:42:19.243340   22854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:42:19.243385   22854 main.go:141] libmachine: Decoding PEM data...
	I1105 10:42:19.243393   22854 main.go:141] libmachine: Parsing certificate...
	I1105 10:42:19.243411   22854 main.go:141] libmachine: Running pre-create checks...
	I1105 10:42:19.243420   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .PreCreateCheck
	I1105 10:42:19.243514   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.243686   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .GetConfigRaw
	I1105 10:42:19.272506   22854 main.go:141] libmachine: Creating machine...
	I1105 10:42:19.272517   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .Create
	I1105 10:42:19.272623   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.272831   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:42:19.272620   22872 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:42:19.272871   22854 main.go:141] libmachine: (force-systemd-env-817000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:42:19.506270   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:42:19.506195   22872 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/id_rsa...
	I1105 10:42:19.555461   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:42:19.555387   22872 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk...
	I1105 10:42:19.555470   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Writing magic tar header
	I1105 10:42:19.555480   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Writing SSH key tar header
	I1105 10:42:19.555879   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:42:19.555838   22872 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000 ...
	I1105 10:42:19.933759   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.933782   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid
	I1105 10:42:19.933836   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Using UUID faef3dd6-b815-467c-a940-47a825936e9c
	I1105 10:42:19.959657   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Generated MAC 6a:8a:d9:be:c4:d1
	I1105 10:42:19.959677   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000
	I1105 10:42:19.959709   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"faef3dd6-b815-467c-a940-47a825936e9c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b25a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:42:19.959738   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"faef3dd6-b815-467c-a940-47a825936e9c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b25a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:42:19.959792   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "faef3dd6-b815-467c-a940-47a825936e9c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000"}
	I1105 10:42:19.959822   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U faef3dd6-b815-467c-a940-47a825936e9c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000"
	I1105 10:42:19.959851   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:42:19.962861   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 DEBUG: hyperkit: Pid is 22873
	I1105 10:42:19.963283   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 0
	I1105 10:42:19.963296   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:19.963360   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:19.964409   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:19.964474   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:19.964487   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:19.964521   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:19.964537   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:19.964556   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:19.964573   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:19.964584   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:19.964598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:19.964610   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:19.964625   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:19.964640   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:19.964655   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:19.964663   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:19.964668   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:19.964690   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:19.964699   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:19.964705   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:19.964713   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:19.964741   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:19.964754   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:19.973718   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:42:19.982273   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:42:19.983512   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:42:19.983540   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:42:19.983552   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:42:19.983574   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:42:20.367106   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:42:20.367121   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:42:20.481783   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:42:20.481807   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:42:20.481832   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:42:20.481851   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:42:20.482636   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:42:20.482647   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:42:21.965975   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 1
	I1105 10:42:21.965988   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:21.966036   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:21.966987   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:21.967084   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:21.967094   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:21.967119   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:21.967127   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:21.967134   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:21.967143   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:21.967149   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:21.967155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:21.967161   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:21.967167   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:21.967173   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:21.967192   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:21.967200   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:21.967207   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:21.967214   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:21.967226   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:21.967238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:21.967246   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:21.967254   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:21.967262   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:23.967716   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 2
	I1105 10:42:23.967743   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:23.967781   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:23.968789   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:23.968833   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:23.968840   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:23.968861   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:23.968872   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:23.968882   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:23.968896   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:23.968910   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:23.968922   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:23.968935   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:23.968942   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:23.968948   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:23.968960   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:23.968968   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:23.968976   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:23.968982   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:23.968987   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:23.968997   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:23.969016   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:23.969024   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:23.969032   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:25.835484   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:42:25.835548   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:42:25.835560   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:42:25.855142   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:42:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:42:25.970504   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 3
	I1105 10:42:25.970560   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:25.970799   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:25.972540   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:25.972715   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:25.972728   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:25.972737   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:25.972745   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:25.972754   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:25.972762   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:25.972793   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:25.972814   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:25.972826   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:25.972834   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:25.972843   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:25.972854   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:25.972865   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:25.972873   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:25.972884   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:25.972893   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:25.972903   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:25.972912   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:25.972922   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:25.972933   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:27.972990   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 4
	I1105 10:42:27.973005   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:27.973046   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:27.974015   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:27.974100   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:27.974110   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:27.974124   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:27.974133   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:27.974139   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:27.974145   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:27.974165   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:27.974175   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:27.974183   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:27.974190   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:27.974199   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:27.974217   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:27.974232   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:27.974244   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:27.974252   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:27.974260   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:27.974275   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:27.974286   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:27.974298   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:27.974307   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:29.976342   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 5
	I1105 10:42:29.976356   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:29.976425   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:29.977385   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:29.977459   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:29.977468   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:29.977475   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:29.977480   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:29.977487   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:29.977492   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:29.977498   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:29.977503   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:29.977519   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:29.977528   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:29.977542   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:29.977556   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:29.977564   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:29.977571   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:29.977586   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:29.977598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:29.977619   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:29.977630   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:29.977638   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:29.977646   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:31.979670   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 6
	I1105 10:42:31.979695   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:31.979725   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:31.980704   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:31.980770   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:31.980778   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:31.980785   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:31.980791   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:31.980817   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:31.980829   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:31.980840   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:31.980856   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:31.980874   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:31.980888   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:31.980896   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:31.980903   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:31.980910   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:31.980933   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:31.980945   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:31.980964   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:31.980971   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:31.980979   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:31.980994   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:31.981005   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:33.981741   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 7
	I1105 10:42:33.981756   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:33.981822   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:33.982790   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:33.982866   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:33.982877   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:33.982891   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:33.982897   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:33.982904   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:33.982910   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:33.982917   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:33.982923   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:33.982930   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:33.982935   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:33.982941   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:33.982948   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:33.982976   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:33.982988   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:33.983004   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:33.983016   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:33.983024   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:33.983032   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:33.983038   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:33.983046   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:35.983309   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 8
	I1105 10:42:35.983323   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:35.983394   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:35.984366   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:35.984446   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:35.984458   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:35.984469   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:35.984480   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:35.984501   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:35.984513   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:35.984526   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:35.984534   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:35.984541   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:35.984555   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:35.984575   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:35.984585   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:35.984593   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:35.984601   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:35.984608   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:35.984622   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:35.984637   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:35.984653   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:35.984662   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:35.984670   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:37.985236   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 9
	I1105 10:42:37.985253   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:37.985331   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:37.986278   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:37.986366   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:37.986377   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:37.986383   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:37.986389   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:37.986397   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:37.986421   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:37.986431   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:37.986439   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:37.986446   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:37.986451   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:37.986464   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:37.986475   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:37.986485   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:37.986492   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:37.986513   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:37.986524   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:37.986531   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:37.986536   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:37.986543   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:37.986551   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:39.988059   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 10
	I1105 10:42:39.988074   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:39.988185   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:39.989096   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:39.989175   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:39.989184   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:39.989192   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:39.989198   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:39.989205   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:39.989210   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:39.989216   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:39.989223   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:39.989237   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:39.989252   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:39.989259   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:39.989267   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:39.989276   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:39.989284   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:39.989290   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:39.989297   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:39.989304   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:39.989312   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:39.989328   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:39.989336   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:41.989598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 11
	I1105 10:42:41.989615   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:41.989687   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:41.990627   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:41.990713   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:41.990724   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:41.990733   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:41.990740   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:41.990746   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:41.990752   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:41.990758   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:41.990765   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:41.990779   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:41.990788   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:41.990804   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:41.990819   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:41.990827   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:41.990833   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:41.990850   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:41.990865   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:41.990873   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:41.990878   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:41.990886   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:41.990894   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:43.992946   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 12
	I1105 10:42:43.992959   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:43.992969   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:43.993982   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:43.994049   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:43.994062   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:43.994071   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:43.994077   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:43.994084   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:43.994091   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:43.994096   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:43.994114   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:43.994126   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:43.994136   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:43.994143   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:43.994149   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:43.994155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:43.994166   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:43.994172   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:43.994178   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:43.994183   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:43.994188   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:43.994195   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:43.994202   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:45.995571   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 13
	I1105 10:42:45.995584   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:45.995679   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:45.996625   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:45.996710   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:45.996719   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:45.996727   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:45.996732   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:45.996748   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:45.996753   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:45.996760   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:45.996765   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:45.996781   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:45.996793   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:45.996802   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:45.996810   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:45.996834   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:45.996846   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:45.996853   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:45.996866   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:45.996873   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:45.996880   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:45.996889   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:45.996897   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:47.998212   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 14
	I1105 10:42:47.998226   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:47.998291   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:47.999241   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:47.999320   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:47.999333   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:47.999358   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:47.999372   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:47.999396   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:47.999411   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:47.999423   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:47.999430   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:47.999436   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:47.999445   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:47.999468   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:47.999481   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:47.999499   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:47.999514   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:47.999528   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:47.999535   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:47.999548   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:47.999555   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:47.999563   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:47.999571   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:50.001403   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 15
	I1105 10:42:50.001423   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:50.001491   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:50.002436   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:50.002515   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:50.002523   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:50.002531   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:50.002536   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:50.002543   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:50.002551   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:50.002558   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:50.002564   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:50.002572   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:50.002578   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:50.002585   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:50.002591   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:50.002605   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:50.002617   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:50.002627   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:50.002636   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:50.002644   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:50.002651   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:50.002659   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:50.002679   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:52.003400   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 16
	I1105 10:42:52.003413   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:52.003474   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:52.004411   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:52.004515   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:52.004528   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:52.004541   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:52.004549   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:52.004555   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:52.004563   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:52.004570   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:52.004584   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:52.004592   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:52.004599   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:52.004612   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:52.004623   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:52.004640   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:52.004651   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:52.004659   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:52.004667   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:52.004674   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:52.004681   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:52.004700   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:52.004714   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:54.006134   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 17
	I1105 10:42:54.006149   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:54.006198   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:54.007222   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:54.007320   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:54.007330   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:54.007345   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:54.007351   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:54.007357   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:54.007362   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:54.007382   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:54.007402   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:54.007416   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:54.007425   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:54.007432   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:54.007439   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:54.007447   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:54.007454   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:54.007460   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:54.007468   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:54.007475   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:54.007481   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:54.007490   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:54.007498   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:56.009574   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 18
	I1105 10:42:56.009590   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:56.009615   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:56.010572   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:56.010628   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:56.010636   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:56.010654   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:56.010668   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:56.010676   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:56.010682   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:56.010688   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:56.010695   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:56.010701   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:56.010706   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:56.010714   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:56.010722   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:56.010738   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:56.010751   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:56.010759   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:56.010765   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:56.010828   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:56.010854   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:56.010863   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:56.010871   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:42:58.011098   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 19
	I1105 10:42:58.011114   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:42:58.011188   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:42:58.012115   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:42:58.012204   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:42:58.012221   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:42:58.012229   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:42:58.012238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:42:58.012245   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:42:58.012252   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:42:58.012260   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:42:58.012269   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:42:58.012277   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:42:58.012285   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:42:58.012291   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:42:58.012297   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:42:58.012303   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:42:58.012310   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:42:58.012318   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:42:58.012325   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:42:58.012332   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:42:58.012348   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:42:58.012360   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:42:58.012369   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:00.014296   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 20
	I1105 10:43:00.014308   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:00.014380   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:00.015356   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:00.015426   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:00.015436   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:00.015443   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:00.015449   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:00.015456   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:00.015465   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:00.015482   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:00.015494   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:00.015503   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:00.015511   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:00.015527   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:00.015538   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:00.015546   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:00.015551   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:00.015571   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:00.015585   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:00.015593   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:00.015599   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:00.015605   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:00.015613   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:02.017486   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 21
	I1105 10:43:02.017508   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:02.017549   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:02.018489   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:02.018592   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:02.018603   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:02.018611   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:02.018616   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:02.018627   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:02.018633   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:02.018639   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:02.018649   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:02.018656   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:02.018662   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:02.018669   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:02.018676   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:02.018682   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:02.018687   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:02.018699   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:02.018706   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:02.018715   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:02.018722   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:02.018731   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:02.018738   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:04.020813   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 22
	I1105 10:43:04.020828   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:04.020866   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:04.021816   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:04.021909   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:04.021919   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:04.021928   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:04.021934   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:04.021941   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:04.021947   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:04.021953   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:04.021958   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:04.021964   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:04.021970   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:04.021979   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:04.021995   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:04.022002   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:04.022010   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:04.022017   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:04.022024   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:04.022031   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:04.022039   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:04.022047   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:04.022054   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:06.024144   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 23
	I1105 10:43:06.024157   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:06.024192   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:06.025161   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:06.025238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:06.025248   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:06.025257   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:06.025262   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:06.025269   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:06.025276   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:06.025292   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:06.025307   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:06.025314   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:06.025321   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:06.025327   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:06.025332   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:06.025338   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:06.025344   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:06.025349   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:06.025354   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:06.025360   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:06.025366   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:06.025372   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:06.025377   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:08.027479   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 24
	I1105 10:43:08.027492   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:08.027501   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:08.028501   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:08.028557   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:08.028584   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:08.028598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:08.028615   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:08.028623   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:08.028630   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:08.028636   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:08.028642   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:08.028648   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:08.028666   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:08.028681   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:08.028689   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:08.028696   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:08.028711   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:08.028725   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:08.028739   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:08.028756   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:08.028764   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:08.028775   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:08.028784   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:10.029806   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 25
	I1105 10:43:10.029829   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:10.029895   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:10.030844   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:10.030931   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:10.030940   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:10.030948   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:10.030953   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:10.030959   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:10.030970   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:10.030979   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:10.030995   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:10.031003   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:10.031011   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:10.031017   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:10.031023   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:10.031031   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:10.031040   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:10.031046   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:10.031052   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:10.031057   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:10.031064   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:10.031072   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:10.031081   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:12.033099   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 26
	I1105 10:43:12.033111   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:12.033155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:12.034108   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:12.034186   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:12.034197   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:12.034211   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:12.034220   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:12.034231   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:12.034241   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:12.034248   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:12.034253   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:12.034262   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:12.034267   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:12.034282   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:12.034296   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:12.034312   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:12.034321   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:12.034335   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:12.034346   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:12.034355   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:12.034363   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:12.034377   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:12.034385   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:14.036436   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 27
	I1105 10:43:14.036449   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:14.036512   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:14.037456   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:14.037547   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:14.037561   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:14.037578   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:14.037584   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:14.037592   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:14.037598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:14.037604   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:14.037610   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:14.037616   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:14.037623   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:14.037637   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:14.037649   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:14.037663   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:14.037671   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:14.037678   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:14.037684   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:14.037698   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:14.037711   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:14.037726   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:14.037736   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:16.037753   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 28
	I1105 10:43:16.037765   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:16.037848   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:16.038775   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:16.038869   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:16.038880   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:16.038888   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:16.038893   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:16.038899   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:16.038926   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:16.038934   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:16.038943   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:16.038957   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:16.038965   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:16.038972   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:16.038980   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:16.038986   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:16.038995   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:16.039003   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:16.039011   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:16.039017   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:16.039025   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:16.039031   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:16.039038   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:18.041101   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 29
	I1105 10:43:18.041115   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:18.041147   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:18.042108   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 6a:8a:d9:be:c4:d1 in /var/db/dhcpd_leases ...
	I1105 10:43:18.042181   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:43:18.042201   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:43:18.042226   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:43:18.042239   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:43:18.042248   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:43:18.042261   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:43:18.042268   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:43:18.042277   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:43:18.042284   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:43:18.042292   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:43:18.042299   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:43:18.042304   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:43:18.042310   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:43:18.042318   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:43:18.042326   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:43:18.042333   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:43:18.042346   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:43:18.042356   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:43:18.042365   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:43:18.042372   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:43:20.043235   22854 client.go:171] duration metric: took 1m0.798974008s to LocalClient.Create
	I1105 10:43:22.045367   22854 start.go:128] duration metric: took 1m2.835056759s to createHost
	I1105 10:43:22.045384   22854 start.go:83] releasing machines lock for "force-systemd-env-817000", held for 1m2.835164762s
	W1105 10:43:22.045414   22854 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:8a:d9:be:c4:d1
	I1105 10:43:22.045827   22854 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:43:22.045852   22854 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:43:22.056925   22854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60787
	I1105 10:43:22.057278   22854 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:43:22.057621   22854 main.go:141] libmachine: Using API Version  1
	I1105 10:43:22.057636   22854 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:43:22.057846   22854 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:43:22.058197   22854 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:43:22.058219   22854 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:43:22.069076   22854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60789
	I1105 10:43:22.069408   22854 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:43:22.069739   22854 main.go:141] libmachine: Using API Version  1
	I1105 10:43:22.069749   22854 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:43:22.069974   22854 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:43:22.070089   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .GetState
	I1105 10:43:22.070190   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.070239   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:22.071382   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .DriverName
	I1105 10:43:22.093029   22854 out.go:177] * Deleting "force-systemd-env-817000" in hyperkit ...
	I1105 10:43:22.135038   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .Remove
	I1105 10:43:22.135207   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.135217   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.135287   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:22.136434   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:22.136497   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | waiting for graceful shutdown
	I1105 10:43:23.138669   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:23.138792   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:23.139953   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | waiting for graceful shutdown
	I1105 10:43:24.141715   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:24.141766   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:24.143032   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | waiting for graceful shutdown
	I1105 10:43:25.145038   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:25.145104   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:25.145933   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | waiting for graceful shutdown
	I1105 10:43:26.146569   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:26.146651   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:26.147807   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | waiting for graceful shutdown
	I1105 10:43:27.149935   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:43:27.150003   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22873
	I1105 10:43:27.150723   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | sending sigkill
	I1105 10:43:27.150732   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W1105 10:43:27.165470   22854 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:8a:d9:be:c4:d1
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:8a:d9:be:c4:d1
	I1105 10:43:27.165483   22854 start.go:729] Will try again in 5 seconds ...
	I1105 10:43:27.177994   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:43:27 WARN : hyperkit: failed to read stdout: EOF
	I1105 10:43:27.178009   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:43:27 WARN : hyperkit: failed to read stderr: EOF
	I1105 10:43:32.166219   22854 start.go:360] acquireMachinesLock for force-systemd-env-817000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:44:24.907666   22854 start.go:364] duration metric: took 52.740466985s to acquireMachinesLock for "force-systemd-env-817000"
	I1105 10:44:24.907711   22854 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-817000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-817000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:44:24.907772   22854 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:44:24.929177   22854 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1105 10:44:24.929261   22854 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:44:24.929287   22854 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:44:24.940472   22854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60793
	I1105 10:44:24.940808   22854 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:44:24.941187   22854 main.go:141] libmachine: Using API Version  1
	I1105 10:44:24.941210   22854 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:44:24.941422   22854 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:44:24.941533   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .GetMachineName
	I1105 10:44:24.941635   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .DriverName
	I1105 10:44:24.941765   22854 start.go:159] libmachine.API.Create for "force-systemd-env-817000" (driver="hyperkit")
	I1105 10:44:24.941787   22854 client.go:168] LocalClient.Create starting
	I1105 10:44:24.941815   22854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:44:24.941876   22854 main.go:141] libmachine: Decoding PEM data...
	I1105 10:44:24.941888   22854 main.go:141] libmachine: Parsing certificate...
	I1105 10:44:24.941928   22854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:44:24.941974   22854 main.go:141] libmachine: Decoding PEM data...
	I1105 10:44:24.941985   22854 main.go:141] libmachine: Parsing certificate...
	I1105 10:44:24.941997   22854 main.go:141] libmachine: Running pre-create checks...
	I1105 10:44:24.942002   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .PreCreateCheck
	I1105 10:44:24.942080   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:24.942140   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .GetConfigRaw
	I1105 10:44:24.971152   22854 main.go:141] libmachine: Creating machine...
	I1105 10:44:24.971161   22854 main.go:141] libmachine: (force-systemd-env-817000) Calling .Create
	I1105 10:44:24.971274   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:24.971412   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:44:24.971240   22905 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:44:24.971460   22854 main.go:141] libmachine: (force-systemd-env-817000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:44:25.336973   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:44:25.336880   22905 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/id_rsa...
	I1105 10:44:25.577434   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:44:25.577342   22905 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk...
	I1105 10:44:25.577448   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Writing magic tar header
	I1105 10:44:25.577457   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Writing SSH key tar header
	I1105 10:44:25.578027   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | I1105 10:44:25.577992   22905 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000 ...
	I1105 10:44:25.960365   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:25.960394   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid
	I1105 10:44:25.960406   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Using UUID 4684277d-875b-4a57-8c0c-2b76ea583bd4
	I1105 10:44:25.985087   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Generated MAC 46:be:89:4d:6b:b2
	I1105 10:44:25.985117   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000
	I1105 10:44:25.985155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4684277d-875b-4a57-8c0c-2b76ea583bd4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:44:25.985195   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"4684277d-875b-4a57-8c0c-2b76ea583bd4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:44:25.985242   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "4684277d-875b-4a57-8c0c-2b76ea583bd4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000"}
	I1105 10:44:25.985292   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 4684277d-875b-4a57-8c0c-2b76ea583bd4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/force-systemd-env-817000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-817000"
	I1105 10:44:25.985304   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:44:25.988297   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 DEBUG: hyperkit: Pid is 22915
	I1105 10:44:25.988865   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 0
	I1105 10:44:25.988878   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:25.988890   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:25.990003   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:25.990067   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:25.990092   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:25.990111   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:25.990124   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:25.990139   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:25.990154   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:25.990164   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:25.990189   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:25.990214   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:25.990229   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:25.990238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:25.990248   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:25.990264   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:25.990271   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:25.990277   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:25.990283   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:25.990291   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:25.990299   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:25.990319   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:25.990338   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:25.999184   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:25 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:44:26.008356   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/force-systemd-env-817000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:44:26.009203   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:44:26.009226   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:44:26.009238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:44:26.009249   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:44:26.398039   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:44:26.398054   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:44:26.512967   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:44:26.512989   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:44:26.513001   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:44:26.513012   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:44:26.513876   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:44:26.513887   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:44:27.994649   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 1
	I1105 10:44:27.994666   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:27.994706   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:27.995701   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:27.995808   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:27.995819   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:27.995825   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:27.995845   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:27.995857   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:27.995875   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:27.995882   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:27.995892   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:27.995900   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:27.995908   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:27.995915   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:27.995921   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:27.995938   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:27.995951   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:27.995959   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:27.995967   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:27.995973   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:27.995983   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:27.995991   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:27.995999   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:30.000035   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 2
	I1105 10:44:30.000052   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:30.000182   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:30.001121   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:30.001259   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:30.001272   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:30.001280   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:30.001288   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:30.001295   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:30.001303   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:30.001315   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:30.001322   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:30.001337   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:30.001350   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:30.001357   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:30.001364   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:30.001373   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:30.001382   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:30.001389   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:30.001397   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:30.001406   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:30.001412   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:30.001418   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:30.001426   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:31.880550   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I1105 10:44:31.880636   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I1105 10:44:31.880645   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I1105 10:44:31.900315   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | 2024/11/05 10:44:31 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I1105 10:44:32.006304   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 3
	I1105 10:44:32.006330   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:32.006574   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:32.008325   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:32.008529   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:32.008542   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:32.008557   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:32.008593   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:32.008608   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:32.008619   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:32.008629   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:32.008640   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:32.008649   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:32.008660   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:32.008668   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:32.008678   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:32.008686   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:32.008697   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:32.008707   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:32.008718   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:32.008730   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:32.008739   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:32.008749   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:32.008759   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:34.012253   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 4
	I1105 10:44:34.012271   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:34.012349   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:34.013323   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:34.013429   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:34.013441   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:34.013464   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:34.013475   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:34.013484   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:34.013493   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:34.013521   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:34.013546   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:34.013563   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:34.013573   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:34.013580   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:34.013588   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:34.013595   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:34.013601   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:34.013607   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:34.013614   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:34.013621   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:34.013637   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:34.013648   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:34.013658   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:36.016120   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 5
	I1105 10:44:36.016134   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:36.016191   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:36.017158   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:36.017231   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:36.017240   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:36.017253   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:36.017263   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:36.017271   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:36.017280   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:36.017296   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:36.017306   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:36.017316   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:36.017324   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:36.017330   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:36.017337   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:36.017343   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:36.017349   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:36.017358   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:36.017370   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:36.017378   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:36.017399   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:36.017406   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:36.017414   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:38.019816   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 6
	I1105 10:44:38.019830   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:38.019866   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:38.020834   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:38.020912   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:38.020923   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:38.020951   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:38.020962   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:38.020987   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:38.021000   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:38.021009   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:38.021016   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:38.021039   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:38.021049   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:38.021068   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:38.021080   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:38.021088   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:38.021094   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:38.021108   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:38.021117   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:38.021124   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:38.021130   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:38.021142   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:38.021155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:40.024794   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 7
	I1105 10:44:40.024813   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:40.024840   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:40.025892   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:40.025955   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:40.025962   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:40.025971   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:40.025976   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:40.025982   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:40.025987   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:40.025993   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:40.025999   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:40.026006   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:40.026014   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:40.026026   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:40.026035   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:40.026044   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:40.026056   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:40.026065   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:40.026074   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:40.026080   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:40.026086   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:40.026094   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:40.026102   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:42.029612   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 8
	I1105 10:44:42.029626   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:42.029691   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:42.030628   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:42.030717   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:42.030726   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:42.030735   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:42.030741   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:42.030747   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:42.030763   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:42.030771   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:42.030777   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:42.030798   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:42.030812   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:42.030819   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:42.030827   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:42.030840   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:42.030849   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:42.030858   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:42.030865   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:42.030872   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:42.030879   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:42.030886   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:42.030893   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:44.032950   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 9
	I1105 10:44:44.032965   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:44.033000   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:44.033939   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:44.034047   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:44.034055   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:44.034063   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:44.034075   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:44.034082   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:44.034088   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:44.034095   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:44.034102   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:44.034127   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:44.034137   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:44.034145   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:44.034153   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:44.034160   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:44.034167   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:44.034182   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:44.034195   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:44.034202   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:44.034208   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:44.034214   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:44.034222   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:46.037343   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 10
	I1105 10:44:46.037356   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:46.037439   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:46.038402   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:46.038473   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:46.038483   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:46.038491   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:46.038497   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:46.038513   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:46.038522   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:46.038529   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:46.038534   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:46.038540   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:46.038551   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:46.038561   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:46.038568   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:46.038574   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:46.038591   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:46.038602   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:46.038618   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:46.038630   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:46.038638   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:46.038645   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:46.038663   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:48.041463   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 11
	I1105 10:44:48.041478   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:48.041549   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:48.042514   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:48.042568   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:48.042578   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:48.042586   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:48.042592   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:48.042602   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:48.042612   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:48.042623   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:48.042648   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:48.042671   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:48.042681   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:48.042695   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:48.042703   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:48.042710   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:48.042717   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:48.042724   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:48.042730   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:48.042741   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:48.042752   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:48.042759   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:48.042767   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:50.043698   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 12
	I1105 10:44:50.043714   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:50.043755   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:50.044698   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:50.044779   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:50.044790   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:50.044819   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:50.044829   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:50.044837   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:50.044844   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:50.044850   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:50.044856   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:50.044866   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:50.044879   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:50.044886   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:50.044894   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:50.044902   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:50.044909   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:50.044916   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:50.044924   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:50.044930   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:50.044939   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:50.044946   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:50.044959   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:52.047795   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 13
	I1105 10:44:52.047818   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:52.047841   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:52.048825   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:52.048904   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:52.048923   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:52.048931   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:52.048936   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:52.048951   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:52.048961   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:52.048969   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:52.048979   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:52.048988   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:52.049000   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:52.049013   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:52.049021   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:52.049029   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:52.049036   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:52.049041   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:52.049061   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:52.049073   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:52.049082   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:52.049089   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:52.049102   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:54.051439   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 14
	I1105 10:44:54.051455   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:54.051502   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:54.052444   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:54.052513   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:54.052522   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:54.052530   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:54.052537   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:54.052554   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:54.052561   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:54.052567   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:54.052580   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:54.052589   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:54.052598   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:54.052609   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:54.052617   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:54.052626   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:54.052633   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:54.052640   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:54.052647   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:54.052654   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:54.052659   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:54.052672   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:54.052684   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:56.054305   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 15
	I1105 10:44:56.054320   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:56.054403   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:56.055342   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:56.055420   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:56.055428   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:56.055437   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:56.055442   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:56.055448   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:56.055454   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:56.055468   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:56.055474   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:56.055480   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:56.055486   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:56.055493   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:56.055498   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:56.055509   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:56.055522   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:56.055529   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:56.055535   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:56.055546   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:56.055553   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:56.055561   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:56.055568   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:44:58.058160   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 16
	I1105 10:44:58.058173   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:44:58.058241   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:44:58.059176   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:44:58.059283   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:44:58.059293   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:44:58.059302   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:44:58.059310   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:44:58.059317   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:44:58.059322   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:44:58.059329   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:44:58.059334   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:44:58.059341   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:44:58.059348   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:44:58.059363   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:44:58.059370   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:44:58.059379   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:44:58.059386   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:44:58.059392   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:44:58.059399   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:44:58.059408   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:44:58.059416   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:44:58.059433   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:44:58.059444   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:00.061970   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 17
	I1105 10:45:00.061988   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:00.062026   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:00.062966   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:00.063061   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:00.063071   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:00.063079   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:00.063088   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:00.063104   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:00.063112   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:00.063118   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:00.063124   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:00.063137   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:00.063151   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:00.063161   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:00.063169   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:00.063184   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:00.063195   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:00.063207   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:00.063213   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:00.063218   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:00.063225   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:00.063231   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:00.063238   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:02.064980   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 18
	I1105 10:45:02.064992   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:02.065062   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:02.066014   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:02.066088   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:02.066108   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:02.066117   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:02.066125   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:02.066131   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:02.066138   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:02.066146   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:02.066152   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:02.066159   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:02.066166   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:02.066171   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:02.066178   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:02.066184   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:02.066191   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:02.066198   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:02.066207   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:02.066216   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:02.066227   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:02.066244   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:02.066255   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:04.068516   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 19
	I1105 10:45:04.068532   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:04.068578   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:04.069546   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:04.069609   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:04.069621   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:04.069635   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:04.069641   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:04.069649   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:04.069657   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:04.069665   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:04.069672   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:04.069690   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:04.069702   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:04.069710   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:04.069717   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:04.069733   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:04.069755   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:04.069769   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:04.069778   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:04.069785   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:04.069792   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:04.069798   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:04.069806   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:06.071052   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 20
	I1105 10:45:06.071067   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:06.071143   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:06.072111   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:06.072185   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:06.072200   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:06.072213   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:06.072222   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:06.072229   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:06.072237   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:06.072243   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:06.072249   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:06.072261   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:06.072278   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:06.072288   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:06.072305   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:06.072317   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:06.072327   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:06.072334   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:06.072342   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:06.072349   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:06.072365   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:06.072376   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:06.072385   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:08.073421   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 21
	I1105 10:45:08.073434   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:08.073496   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:08.074472   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:08.074528   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:08.074547   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:08.074559   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:08.074569   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:08.074579   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:08.074588   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:08.074597   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:08.074607   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:08.074622   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:08.074641   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:08.074657   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:08.074668   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:08.074677   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:08.074685   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:08.074692   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:08.074715   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:08.074724   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:08.074731   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:08.074739   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:08.074746   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:10.075076   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 22
	I1105 10:45:10.075089   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:10.075124   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:10.076097   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:10.076143   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:10.076155   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:10.076179   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:10.076195   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:10.076211   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:10.076218   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:10.076225   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:10.076232   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:10.076249   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:10.076268   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:10.076278   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:10.076285   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:10.076300   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:10.076309   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:10.076317   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:10.076322   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:10.076327   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:10.076334   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:10.076341   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:10.076349   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:12.078605   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 23
	I1105 10:45:12.078617   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:12.078654   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:12.079641   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:12.079712   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:12.079722   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:12.079731   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:12.079738   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:12.079745   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:12.079751   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:12.079762   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:12.079769   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:12.079775   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:12.079784   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:12.079791   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:12.079800   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:12.079815   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:12.079825   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:12.079841   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:12.079849   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:12.079862   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:12.079869   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:12.079877   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:12.079885   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:14.081552   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 24
	I1105 10:45:14.081564   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:14.081573   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:14.082548   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:14.082640   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:14.082653   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:14.082665   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:14.082675   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:14.082682   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:14.082688   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:14.082695   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:14.082710   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:14.082721   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:14.082729   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:14.082737   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:14.082744   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:14.082765   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:14.082774   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:14.082782   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:14.082788   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:14.082799   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:14.082812   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:14.082826   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:14.082835   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:16.083681   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 25
	I1105 10:45:16.083694   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:16.083743   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:16.084696   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:16.084800   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:16.084812   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:16.084828   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:16.084836   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:16.084843   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:16.084849   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:16.084857   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:16.084864   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:16.084875   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:16.084881   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:16.084887   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:16.084893   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:16.084901   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:16.084909   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:16.084917   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:16.084924   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:16.084932   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:16.084939   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:16.084946   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:16.084961   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:18.085973   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 26
	I1105 10:45:18.085989   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:18.086024   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:18.086978   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:18.087076   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:18.087087   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:18.087095   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:18.087103   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:18.087113   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:18.087125   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:18.087141   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:18.087153   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:18.087170   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:18.087178   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:18.087185   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:18.087194   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:18.087201   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:18.087206   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:18.087213   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:18.087219   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:18.087224   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:18.087230   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:18.087237   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:18.087243   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
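	The retry loop above scans /var/db/dhcpd_leases every two seconds for the VM's MAC (46:be:89:4d:6b:b2). Note how the lease file abbreviates octets: the entry for HWAddress 8e:5b:cc:86:47:0a carries ID 1,8e:5b:cc:86:47:a, with the leading zero dropped. A minimal sketch of the normalization such a lookup needs (the function name normalizeMAC is hypothetical, not minikube's actual code):

	```go
	package main

	import (
		"fmt"
		"strings"
	)

	// normalizeMAC lower-cases a MAC and strips the leading zero from each
	// octet, matching the abbreviated form macOS writes into
	// /var/db/dhcpd_leases (e.g. 8e:5b:cc:86:47:0a -> 8e:5b:cc:86:47:a).
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			trimmed := strings.TrimLeft(p, "0")
			if trimmed == "" {
				trimmed = "0" // an all-zero octet stays "0"
			}
			parts[i] = trimmed
		}
		return strings.Join(parts, ":")
	}

	func main() {
		// Comparing normalized forms makes the lookup robust to the
		// zero-stripped octets seen in the lease entries above.
		fmt.Println(normalizeMAC("8E:5B:CC:86:47:0A"))
		fmt.Println(normalizeMAC("46:be:89:4d:6b:b2") == normalizeMAC("46:BE:89:4D:6B:B2"))
	}
	```

	Without this normalization a byte-for-byte comparison against the lease file would never match, and the loop would retry until its attempt limit even after the VM obtained a lease.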
	I1105 10:45:20.088645   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 27
	I1105 10:45:20.088660   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:20.088703   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:20.089642   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:20.089751   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:20.089786   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:20.089796   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:20.089806   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:20.089825   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:20.089838   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:20.089851   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:20.089863   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:20.089873   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:20.089884   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:20.089892   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:20.089902   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:20.089911   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:20.089927   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:20.089935   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:20.089943   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:20.089950   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:20.089955   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:20.089963   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:20.089971   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:22.092066   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 28
	I1105 10:45:22.092080   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:22.092149   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:22.093087   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:22.093197   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:22.093206   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:22.093213   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:22.093221   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:22.093228   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:22.093233   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:22.093239   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:22.093245   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:22.093260   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:22.093273   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:22.093285   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:22.093294   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:22.093301   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:22.093314   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:22.093321   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:22.093329   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:22.093343   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:22.093355   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:22.093365   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:22.093373   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:24.095525   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Attempt 29
	I1105 10:45:24.095539   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:45:24.095561   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | hyperkit pid from json: 22915
	I1105 10:45:24.096530   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...
	I1105 10:45:24.096613   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1105 10:45:24.096622   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:45:24.096629   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:45:24.096635   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:45:24.096642   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:45:24.096648   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:45:24.096664   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:45:24.096676   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:45:24.096684   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:45:24.096693   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:45:24.096702   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:45:24.096710   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:45:24.096717   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:45:24.096730   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:45:24.096739   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:45:24.096746   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:45:24.096754   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:45:24.096777   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:45:24.096793   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:45:24.096804   22854 main.go:141] libmachine: (force-systemd-env-817000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:45:26.097028   22854 client.go:171] duration metric: took 1m1.124116275s to LocalClient.Create
	I1105 10:45:28.099265   22854 start.go:128] duration metric: took 1m3.160243855s to createHost
	I1105 10:45:28.099283   22854 start.go:83] releasing machines lock for "force-systemd-env-817000", held for 1m3.16035705s
	W1105 10:45:28.099381   22854 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-817000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:be:89:4d:6b:b2
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-817000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:be:89:4d:6b:b2
	I1105 10:45:28.161467   22854 out.go:201] 
	W1105 10:45:28.182712   22854 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:be:89:4d:6b:b2
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:be:89:4d:6b:b2
	W1105 10:45:28.182729   22854 out.go:270] * 
	* 
	W1105 10:45:28.183413   22854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:45:28.245641   22854 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-817000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-817000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-817000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (197.523311ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-817000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-817000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-11-05 10:45:28.571983 -0800 PST m=+3905.092977116
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-817000 -n force-systemd-env-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-817000 -n force-systemd-env-817000: exit status 7 (100.164123ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 10:45:28.669667   22944 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:45:28.669690   22944 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-817000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-817000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-817000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-817000: (5.277776988s)
--- FAIL: TestForceSystemdEnv (233.79s)
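The polling loop in the log above ("Searching for 46:be:89:4d:6b:b2 in /var/db/dhcpd_leases ...") is the hyperkit driver repeatedly scanning macOS's DHCP lease file for an entry whose HWAddress matches the VM's generated MAC; the failure is reported once the retry budget is exhausted without a match. A minimal, hypothetical Go sketch of that matching step (the function name and regex are illustrative, not minikube's actual implementation; the real driver also normalizes leading-zero octets in MAC addresses):

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// findIPForMAC scans dhcpd_leases-style entries of the form
//
//	{Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:... Lease:...}
//
// and returns the IPAddress of the first entry whose HWAddress matches mac
// (case-insensitively). The second return value reports whether a match was found.
func findIPForMAC(leases string, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+)\s+HWAddress:(\S+)`)
	scanner := bufio.NewScanner(strings.NewReader(leases))
	for scanner.Scan() {
		if m := re.FindStringSubmatch(scanner.Text()); m != nil && strings.EqualFold(m[2], mac) {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	leases := `{Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
{Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}`
	ip, ok := findIPForMAC(leases, "4a:4e:c6:49:69:60")
	fmt.Println(ip, ok) // the VM booted but never requested a lease, so in the failing run above this search keeps returning no match
}
```

In the failing test, 19 lease entries exist but none carries the new VM's MAC, so each attempt returns no match and the driver eventually gives up with "IP address never found in dhcp leases file".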

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (130.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 node start m02 -v=7 --alsologtostderr
E1105 10:05:55.936712   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:06:31.122029   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 node start m02 -v=7 --alsologtostderr: exit status 90 (1m17.832672085s)

                                                
                                                
-- stdout --
	* Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	* Restarting existing hyperkit VM for "ha-213000-m02" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 10:05:48.565358   20256 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:05:48.566266   20256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:05:48.566273   20256 out.go:358] Setting ErrFile to fd 2...
	I1105 10:05:48.566277   20256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:05:48.566470   20256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:05:48.566821   20256 mustload.go:65] Loading cluster: ha-213000
	I1105 10:05:48.567147   20256 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:05:48.567518   20256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:48.567573   20256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:48.578560   20256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58398
	I1105 10:05:48.578963   20256 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:48.579398   20256 main.go:141] libmachine: Using API Version  1
	I1105 10:05:48.579414   20256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:48.579623   20256 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:48.579723   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:05:48.579826   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:48.579886   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:05:48.581019   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
	W1105 10:05:48.581062   20256 host.go:58] "ha-213000-m02" host status: Stopped
	I1105 10:05:48.601718   20256 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:05:48.622577   20256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:05:48.622640   20256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:05:48.622662   20256 cache.go:56] Caching tarball of preloaded images
	I1105 10:05:48.622870   20256 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:05:48.622884   20256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:05:48.623048   20256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:05:48.623810   20256 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:05:48.623986   20256 start.go:364] duration metric: took 89.508µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:05:48.624007   20256 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:05:48.624019   20256 fix.go:54] fixHost starting: m02
	I1105 10:05:48.624298   20256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:48.624315   20256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:48.635427   20256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58400
	I1105 10:05:48.635807   20256 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:48.636157   20256 main.go:141] libmachine: Using API Version  1
	I1105 10:05:48.636191   20256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:48.636402   20256 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:48.636516   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:05:48.636622   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:05:48.636709   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:48.636802   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:05:48.638013   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
	I1105 10:05:48.638061   20256 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:05:48.638087   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:05:48.638201   20256 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:05:48.674525   20256 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:05:48.711624   20256 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:05:48.711866   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:48.712013   20256 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:05:48.713879   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
	I1105 10:05:48.713893   20256 main.go:141] libmachine: (ha-213000-m02) DBG | pid 19738 is in state "Stopped"
	I1105 10:05:48.713920   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:05:48.714445   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:05:48.737867   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:05:48.737896   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:05:48.738073   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000423290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:05:48.738112   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000423290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:05:48.738166   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:05:48.738207   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:05:48.738227   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:05:48.739774   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Pid is 20260
	I1105 10:05:48.740222   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:05:48.740241   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:48.740308   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:05:48.742010   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:05:48.742153   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:05:48.742166   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6c50}
	I1105 10:05:48.742179   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a6bfc}
	I1105 10:05:48.742193   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:05:48.742202   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:05:48.742213   20256 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:05:48.742277   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:05:48.743333   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:05:48.743578   20256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:05:48.744132   20256 machine.go:93] provisionDockerMachine start ...
	I1105 10:05:48.744144   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:05:48.744314   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:05:48.744444   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:05:48.744575   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:05:48.744733   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:05:48.744923   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:05:48.745188   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:05:48.745474   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:05:48.745487   20256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:05:48.752152   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:05:48.761882   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:05:48.763094   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:05:48.763122   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:05:48.763133   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:05:48.763148   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:05:49.183042   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:05:49.183062   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:05:49.297871   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:05:49.297901   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:05:49.297909   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:05:49.297915   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:05:49.298744   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:05:49.298765   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:05:55.045878   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:05:55.045934   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:05:55.045945   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:05:55.072103   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:06:01.902321   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:06:01.902334   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:06:01.902471   20256 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:06:01.902479   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:06:01.902590   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:01.902679   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:01.902772   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:01.902849   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:01.902946   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:01.903089   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:01.903231   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:01.903240   20256 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:06:01.966084   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:06:01.966104   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:01.966245   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:01.966360   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:01.966460   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:01.966556   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:01.966713   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:01.966850   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:01.966861   20256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:06:02.024689   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:06:02.024711   20256 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:06:02.024734   20256 buildroot.go:174] setting up certificates
	I1105 10:06:02.024744   20256 provision.go:84] configureAuth start
	I1105 10:06:02.024752   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:06:02.024890   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:06:02.024981   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:02.025072   20256 provision.go:143] copyHostCerts
	I1105 10:06:02.025106   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:06:02.025184   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:06:02.025191   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:06:02.025991   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:06:02.026202   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:06:02.026252   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:06:02.026257   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:06:02.026354   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:06:02.026514   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:06:02.026568   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:06:02.026573   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:06:02.026659   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:06:02.026826   20256 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:06:02.323583   20256 provision.go:177] copyRemoteCerts
	I1105 10:06:02.323661   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:06:02.323678   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:02.323837   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:02.323933   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.324017   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:02.324099   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:06:02.356407   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:06:02.356496   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:06:02.375548   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:06:02.375636   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:06:02.394837   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:06:02.394913   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:06:02.414976   20256 provision.go:87] duration metric: took 390.220119ms to configureAuth
	I1105 10:06:02.414991   20256 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:06:02.415153   20256 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:06:02.415168   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:02.415316   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:02.415398   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:02.415493   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.415566   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.415650   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:02.415760   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:02.415878   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:02.415885   20256 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:06:02.467599   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:06:02.467611   20256 buildroot.go:70] root file system type: tmpfs
	I1105 10:06:02.467695   20256 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:06:02.467711   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:02.467850   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:02.467935   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.468019   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.468113   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:02.468271   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:02.468414   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:02.468462   20256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:06:02.530766   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:06:02.530790   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:02.530937   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:02.531027   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.531111   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:02.531199   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:02.531328   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:02.531468   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:02.531480   20256 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:06:04.160167   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:06:04.160187   20256 machine.go:96] duration metric: took 15.416186025s to provisionDockerMachine
	I1105 10:06:04.160198   20256 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:06:04.160206   20256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:06:04.160216   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.160420   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:06:04.160432   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:04.160532   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:04.160615   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.160716   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:04.160808   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:06:04.194422   20256 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:06:04.198135   20256 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:06:04.198148   20256 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:06:04.198264   20256 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:06:04.198720   20256 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:06:04.198729   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:06:04.199000   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:06:04.207700   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:06:04.235895   20256 start.go:296] duration metric: took 75.687601ms for postStartSetup
	I1105 10:06:04.235919   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.236126   20256 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:06:04.236140   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:04.236254   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:04.236352   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.236435   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:04.236505   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:06:04.275413   20256 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:06:04.275492   20256 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:06:04.329693   20256 fix.go:56] duration metric: took 15.70580398s for fixHost
	I1105 10:06:04.329716   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:04.329855   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:04.329953   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.330042   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.330140   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:04.330282   20256 main.go:141] libmachine: Using SSH client type: native
	I1105 10:06:04.330427   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:06:04.330434   20256 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:06:04.384429   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829964.670181594
	
	I1105 10:06:04.384451   20256 fix.go:216] guest clock: 1730829964.670181594
	I1105 10:06:04.384462   20256 fix.go:229] Guest: 2024-11-05 10:06:04.670181594 -0800 PST Remote: 2024-11-05 10:06:04.329706 -0800 PST m=+15.805869088 (delta=340.475594ms)
	I1105 10:06:04.384480   20256 fix.go:200] guest clock delta is within tolerance: 340.475594ms
	I1105 10:06:04.384485   20256 start.go:83] releasing machines lock for "ha-213000-m02", held for 15.760633059s
	I1105 10:06:04.384502   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.384643   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:06:04.384750   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.385100   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.385199   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:06:04.385359   20256 ssh_runner.go:195] Run: systemctl --version
	I1105 10:06:04.385370   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:04.385462   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:04.385548   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.385636   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:04.385726   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:06:04.386125   20256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:06:04.386154   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:06:04.386239   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:06:04.386315   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:06:04.386385   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:06:04.386469   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:06:04.416787   20256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:06:04.421521   20256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:06:04.421594   20256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:06:04.467952   20256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:06:04.467969   20256 start.go:495] detecting cgroup driver to use...
	I1105 10:06:04.468118   20256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:06:04.483872   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:06:04.493367   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:06:04.502408   20256 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:06:04.502472   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:06:04.511863   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:06:04.521122   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:06:04.530089   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:06:04.539230   20256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:06:04.548492   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:06:04.557508   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:06:04.567022   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:06:04.576316   20256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:06:04.584439   20256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:06:04.584505   20256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:06:04.595533   20256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:06:04.604183   20256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:06:04.709513   20256 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:06:04.727736   20256 start.go:495] detecting cgroup driver to use...
	I1105 10:06:04.727831   20256 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:06:04.745607   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:06:04.761103   20256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:06:04.781226   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:06:04.792395   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:06:04.803385   20256 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:06:04.826064   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:06:04.836574   20256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:06:04.852019   20256 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:06:04.854975   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:06:04.862257   20256 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:06:04.876193   20256 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:06:04.975277   20256 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:06:05.077722   20256 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:06:05.077813   20256 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:06:05.091772   20256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:06:05.185952   20256 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:07:06.208246   20256 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.022826795s)
	I1105 10:07:06.208340   20256 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:07:06.243596   20256 out.go:201] 
	W1105 10:07:06.279685   20256 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:06:03 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.099536200Z" level=info msg="Starting up"
	Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100003892Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100560106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.115521347Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132308567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132358114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132406596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132416672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132628271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132663193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132794006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132829321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132841122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132848619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133048469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133441766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134947295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134983072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135091230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135124963Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135453326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135498250Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.138968658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139014556Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139027268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139037047Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139045875Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139087106Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139248954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139357359Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139397899Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139410860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139419925Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139428359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139436120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139445180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139455667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139464176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139472008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139479262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139492597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139501736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139517261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139531320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139540562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139548884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139558003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139566476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139579643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139591707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139599492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139607047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139614740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139629471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139645957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139654458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139664126Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139690121Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139701137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139708757Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139716438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139723384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139731153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139738505Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139917381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139977071Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140005104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140045992Z" level=info msg="containerd successfully booted in 0.025715s"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.121357875Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.185022502Z" level=info msg="Loading containers: start."
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.310121265Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.376080494Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.418336443Z" level=info msg="Loading containers: done."
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425009209Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425044021Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425060317Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425589655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443754722Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443909983Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:06:04 ha-213000-m02 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.484920310Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485795881Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485837869Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485866356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485902025Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:06:05 ha-213000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:06:06 ha-213000-m02 dockerd[1168]: time="2024-11-05T18:06:06.522761221Z" level=info msg="Starting up"
	Nov 05 18:07:06 ha-213000-m02 dockerd[1168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425009209Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425044021Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425060317Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425589655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443754722Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443909983Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:06:04 ha-213000-m02 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.484920310Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485795881Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485837869Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485866356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485902025Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:06:05 ha-213000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:06:06 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:06:06 ha-213000-m02 dockerd[1168]: time="2024-11-05T18:06:06.522761221Z" level=info msg="Starting up"
	Nov 05 18:07:06 ha-213000-m02 dockerd[1168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:07:06 ha-213000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:07:06.279763   20256 out.go:270] * 
	* 
	W1105 10:07:06.285595   20256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:07:06.306440   20256 out.go:201] 

** /stderr **
ha_test.go:424: I1105 10:05:48.565358   20256 out.go:345] Setting OutFile to fd 1 ...
I1105 10:05:48.566266   20256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:05:48.566273   20256 out.go:358] Setting ErrFile to fd 2...
I1105 10:05:48.566277   20256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:05:48.566470   20256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:05:48.566821   20256 mustload.go:65] Loading cluster: ha-213000
I1105 10:05:48.567147   20256 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:05:48.567518   20256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:05:48.567573   20256 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:05:48.578560   20256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58398
I1105 10:05:48.578963   20256 main.go:141] libmachine: () Calling .GetVersion
I1105 10:05:48.579398   20256 main.go:141] libmachine: Using API Version  1
I1105 10:05:48.579414   20256 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:05:48.579623   20256 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:05:48.579723   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
I1105 10:05:48.579826   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:05:48.579886   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
I1105 10:05:48.581019   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
W1105 10:05:48.581062   20256 host.go:58] "ha-213000-m02" host status: Stopped
I1105 10:05:48.601718   20256 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
I1105 10:05:48.622577   20256 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1105 10:05:48.622640   20256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
I1105 10:05:48.622662   20256 cache.go:56] Caching tarball of preloaded images
I1105 10:05:48.622870   20256 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1105 10:05:48.622884   20256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1105 10:05:48.623048   20256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
I1105 10:05:48.623810   20256 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1105 10:05:48.623986   20256 start.go:364] duration metric: took 89.508µs to acquireMachinesLock for "ha-213000-m02"
I1105 10:05:48.624007   20256 start.go:96] Skipping create...Using existing machine configuration
I1105 10:05:48.624019   20256 fix.go:54] fixHost starting: m02
I1105 10:05:48.624298   20256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:05:48.624315   20256 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:05:48.635427   20256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58400
I1105 10:05:48.635807   20256 main.go:141] libmachine: () Calling .GetVersion
I1105 10:05:48.636157   20256 main.go:141] libmachine: Using API Version  1
I1105 10:05:48.636191   20256 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:05:48.636402   20256 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:05:48.636516   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:05:48.636622   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
I1105 10:05:48.636709   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:05:48.636802   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
I1105 10:05:48.638013   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
I1105 10:05:48.638061   20256 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
I1105 10:05:48.638087   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
W1105 10:05:48.638201   20256 fix.go:138] unexpected machine state, will restart: <nil>
I1105 10:05:48.674525   20256 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
I1105 10:05:48.711624   20256 main.go:141] libmachine: (ha-213000-m02) Calling .Start
I1105 10:05:48.711866   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:05:48.712013   20256 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
I1105 10:05:48.713879   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
I1105 10:05:48.713893   20256 main.go:141] libmachine: (ha-213000-m02) DBG | pid 19738 is in state "Stopped"
I1105 10:05:48.713920   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
I1105 10:05:48.714445   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
I1105 10:05:48.737867   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
I1105 10:05:48.737896   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
I1105 10:05:48.738073   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000423290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I1105 10:05:48.738112   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000423290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I1105 10:05:48.738166   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
I1105 10:05:48.738207   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
I1105 10:05:48.738227   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I1105 10:05:48.739774   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 DEBUG: hyperkit: Pid is 20260
I1105 10:05:48.740222   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
I1105 10:05:48.740241   20256 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:05:48.740308   20256 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
I1105 10:05:48.742010   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
I1105 10:05:48.742153   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
I1105 10:05:48.742166   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6c50}
I1105 10:05:48.742179   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a6bfc}
I1105 10:05:48.742193   20256 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
I1105 10:05:48.742202   20256 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
I1105 10:05:48.742213   20256 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
I1105 10:05:48.742277   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
I1105 10:05:48.743333   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
I1105 10:05:48.743578   20256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
I1105 10:05:48.744132   20256 machine.go:93] provisionDockerMachine start ...
I1105 10:05:48.744144   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:05:48.744314   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:05:48.744444   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:05:48.744575   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:05:48.744733   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:05:48.744923   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:05:48.745188   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:05:48.745474   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:05:48.745487   20256 main.go:141] libmachine: About to run SSH command:
hostname
I1105 10:05:48.752152   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
I1105 10:05:48.761882   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I1105 10:05:48.763094   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I1105 10:05:48.763122   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I1105 10:05:48.763133   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I1105 10:05:48.763148   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:48 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I1105 10:05:49.183042   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I1105 10:05:49.183062   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I1105 10:05:49.297871   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I1105 10:05:49.297901   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I1105 10:05:49.297909   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I1105 10:05:49.297915   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I1105 10:05:49.298744   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I1105 10:05:49.298765   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:49 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I1105 10:05:55.045878   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I1105 10:05:55.045934   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I1105 10:05:55.045945   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I1105 10:05:55.072103   20256 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:05:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I1105 10:06:01.902321   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I1105 10:06:01.902334   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
I1105 10:06:01.902471   20256 buildroot.go:166] provisioning hostname "ha-213000-m02"
I1105 10:06:01.902479   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
I1105 10:06:01.902590   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:01.902679   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:01.902772   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:01.902849   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:01.902946   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:01.903089   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:01.903231   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:01.903240   20256 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
I1105 10:06:01.966084   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02

I1105 10:06:01.966104   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:01.966245   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:01.966360   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:01.966460   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:01.966556   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:01.966713   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:01.966850   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:01.966861   20256 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I1105 10:06:02.024689   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I1105 10:06:02.024711   20256 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
I1105 10:06:02.024734   20256 buildroot.go:174] setting up certificates
I1105 10:06:02.024744   20256 provision.go:84] configureAuth start
I1105 10:06:02.024752   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
I1105 10:06:02.024890   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
I1105 10:06:02.024981   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:02.025072   20256 provision.go:143] copyHostCerts
I1105 10:06:02.025106   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
I1105 10:06:02.025184   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
I1105 10:06:02.025191   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
I1105 10:06:02.025991   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
I1105 10:06:02.026202   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
I1105 10:06:02.026252   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
I1105 10:06:02.026257   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
I1105 10:06:02.026354   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
I1105 10:06:02.026514   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
I1105 10:06:02.026568   20256 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
I1105 10:06:02.026573   20256 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
I1105 10:06:02.026659   20256 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
I1105 10:06:02.026826   20256 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
I1105 10:06:02.323583   20256 provision.go:177] copyRemoteCerts
I1105 10:06:02.323661   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1105 10:06:02.323678   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:02.323837   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:02.323933   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.324017   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:02.324099   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
I1105 10:06:02.356407   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1105 10:06:02.356496   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1105 10:06:02.375548   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
I1105 10:06:02.375636   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1105 10:06:02.394837   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1105 10:06:02.394913   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1105 10:06:02.414976   20256 provision.go:87] duration metric: took 390.220119ms to configureAuth
I1105 10:06:02.414991   20256 buildroot.go:189] setting minikube options for container-runtime
I1105 10:06:02.415153   20256 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:06:02.415168   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:02.415316   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:02.415398   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:02.415493   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.415566   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.415650   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:02.415760   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:02.415878   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:02.415885   20256 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1105 10:06:02.467599   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I1105 10:06:02.467611   20256 buildroot.go:70] root file system type: tmpfs
I1105 10:06:02.467695   20256 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1105 10:06:02.467711   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:02.467850   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:02.467935   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.468019   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.468113   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:02.468271   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:02.468414   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:02.468462   20256 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1105 10:06:02.530766   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1105 10:06:02.530790   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:02.530937   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:02.531027   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.531111   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:02.531199   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:02.531328   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:02.531468   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:02.531480   20256 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1105 10:06:04.160167   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I1105 10:06:04.160187   20256 machine.go:96] duration metric: took 15.416186025s to provisionDockerMachine
I1105 10:06:04.160198   20256 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
I1105 10:06:04.160206   20256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1105 10:06:04.160216   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.160420   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1105 10:06:04.160432   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:04.160532   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:04.160615   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.160716   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:04.160808   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
I1105 10:06:04.194422   20256 ssh_runner.go:195] Run: cat /etc/os-release
I1105 10:06:04.198135   20256 info.go:137] Remote host: Buildroot 2023.02.9
I1105 10:06:04.198148   20256 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
I1105 10:06:04.198264   20256 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
I1105 10:06:04.198720   20256 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
I1105 10:06:04.198729   20256 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
I1105 10:06:04.199000   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1105 10:06:04.207700   20256 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
I1105 10:06:04.235895   20256 start.go:296] duration metric: took 75.687601ms for postStartSetup
I1105 10:06:04.235919   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.236126   20256 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I1105 10:06:04.236140   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:04.236254   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:04.236352   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.236435   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:04.236505   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
I1105 10:06:04.275413   20256 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I1105 10:06:04.275492   20256 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I1105 10:06:04.329693   20256 fix.go:56] duration metric: took 15.70580398s for fixHost
I1105 10:06:04.329716   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:04.329855   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:04.329953   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.330042   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.330140   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:04.330282   20256 main.go:141] libmachine: Using SSH client type: native
I1105 10:06:04.330427   20256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa440620] 0xa443300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
I1105 10:06:04.330434   20256 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1105 10:06:04.384429   20256 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829964.670181594

I1105 10:06:04.384451   20256 fix.go:216] guest clock: 1730829964.670181594
I1105 10:06:04.384462   20256 fix.go:229] Guest: 2024-11-05 10:06:04.670181594 -0800 PST Remote: 2024-11-05 10:06:04.329706 -0800 PST m=+15.805869088 (delta=340.475594ms)
I1105 10:06:04.384480   20256 fix.go:200] guest clock delta is within tolerance: 340.475594ms
I1105 10:06:04.384485   20256 start.go:83] releasing machines lock for "ha-213000-m02", held for 15.760633059s
I1105 10:06:04.384502   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.384643   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
I1105 10:06:04.384750   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.385100   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.385199   20256 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
I1105 10:06:04.385359   20256 ssh_runner.go:195] Run: systemctl --version
I1105 10:06:04.385370   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:04.385462   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:04.385548   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.385636   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:04.385726   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
I1105 10:06:04.386125   20256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1105 10:06:04.386154   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
I1105 10:06:04.386239   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
I1105 10:06:04.386315   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
I1105 10:06:04.386385   20256 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
I1105 10:06:04.386469   20256 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
I1105 10:06:04.416787   20256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1105 10:06:04.421521   20256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1105 10:06:04.421594   20256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1105 10:06:04.467952   20256 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1105 10:06:04.467969   20256 start.go:495] detecting cgroup driver to use...
I1105 10:06:04.468118   20256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1105 10:06:04.483872   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1105 10:06:04.493367   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1105 10:06:04.502408   20256 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1105 10:06:04.502472   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1105 10:06:04.511863   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1105 10:06:04.521122   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1105 10:06:04.530089   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1105 10:06:04.539230   20256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1105 10:06:04.548492   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1105 10:06:04.557508   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1105 10:06:04.567022   20256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1105 10:06:04.576316   20256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1105 10:06:04.584439   20256 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:

stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1105 10:06:04.584505   20256 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1105 10:06:04.595533   20256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1105 10:06:04.604183   20256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1105 10:06:04.709513   20256 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1105 10:06:04.727736   20256 start.go:495] detecting cgroup driver to use...
I1105 10:06:04.727831   20256 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1105 10:06:04.745607   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1105 10:06:04.761103   20256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1105 10:06:04.781226   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1105 10:06:04.792395   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1105 10:06:04.803385   20256 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1105 10:06:04.826064   20256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1105 10:06:04.836574   20256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1105 10:06:04.852019   20256 ssh_runner.go:195] Run: which cri-dockerd
I1105 10:06:04.854975   20256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1105 10:06:04.862257   20256 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I1105 10:06:04.876193   20256 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1105 10:06:04.975277   20256 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1105 10:06:05.077722   20256 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I1105 10:06:05.077813   20256 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1105 10:06:05.091772   20256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1105 10:06:05.185952   20256 ssh_runner.go:195] Run: sudo systemctl restart docker
I1105 10:07:06.208246   20256 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.022826795s)
I1105 10:07:06.208340   20256 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I1105 10:07:06.243596   20256 out.go:201] 
W1105 10:07:06.279685   20256 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Nov 05 18:06:03 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.099536200Z" level=info msg="Starting up"
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100003892Z" level=info msg="containerd not running, starting managed containerd"
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100560106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.115521347Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132308567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132358114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132406596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132416672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132628271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132663193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132794006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132829321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132841122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132848619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133048469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133441766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134947295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134983072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135091230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135124963Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135453326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135498250Z" level=info msg="metadata content store policy set" policy=shared
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.138968658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139014556Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139027268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139037047Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139045875Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139087106Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139248954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139357359Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139397899Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139410860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139419925Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139428359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139436120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139445180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139455667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139464176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139472008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139479262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139492597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139501736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139517261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139531320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139540562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139548884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139558003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139566476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139579643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139591707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139599492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139607047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139614740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139629471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139645957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139654458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139664126Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139690121Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139701137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139708757Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139716438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139723384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139731153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139738505Z" level=info msg="NRI interface is disabled by configuration."
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139917381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139977071Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140005104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140045992Z" level=info msg="containerd successfully booted in 0.025715s"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.121357875Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.185022502Z" level=info msg="Loading containers: start."
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.310121265Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.376080494Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.418336443Z" level=info msg="Loading containers: done."
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425009209Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425044021Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425060317Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425589655Z" level=info msg="Daemon has completed initialization"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443754722Z" level=info msg="API listen on /var/run/docker.sock"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443909983Z" level=info msg="API listen on [::]:2376"
Nov 05 18:06:04 ha-213000-m02 systemd[1]: Started Docker Application Container Engine.
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.484920310Z" level=info msg="Processing signal 'terminated'"
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485795881Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485837869Z" level=info msg="Daemon shutdown complete"
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485866356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485902025Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Nov 05 18:06:05 ha-213000-m02 systemd[1]: Stopping Docker Application Container Engine...
Nov 05 18:06:06 ha-213000-m02 systemd[1]: docker.service: Deactivated successfully.
Nov 05 18:06:06 ha-213000-m02 systemd[1]: Stopped Docker Application Container Engine.
Nov 05 18:06:06 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
Nov 05 18:06:06 ha-213000-m02 dockerd[1168]: time="2024-11-05T18:06:06.522761221Z" level=info msg="Starting up"
Nov 05 18:07:06 ha-213000-m02 dockerd[1168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 05 18:07:06 ha-213000-m02 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Nov 05 18:06:03 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.099536200Z" level=info msg="Starting up"
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100003892Z" level=info msg="containerd not running, starting managed containerd"
Nov 05 18:06:03 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:03.100560106Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.115521347Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132308567Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132358114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132406596Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132416672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132628271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132663193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132794006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132829321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132841122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.132848619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133048469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.133441766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134947295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.134983072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135091230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135124963Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135453326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.135498250Z" level=info msg="metadata content store policy set" policy=shared
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.138968658Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139014556Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139027268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139037047Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139045875Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139087106Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139248954Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139357359Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139397899Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139410860Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139419925Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139428359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139436120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139445180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139455667Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139464176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139472008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139479262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139492597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139501736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139517261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139531320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139540562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139548884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139558003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139566476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139579643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139591707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139599492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139607047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139614740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139629471Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139645957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139654458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139664126Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139690121Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139701137Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139708757Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139716438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139723384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139731153Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139738505Z" level=info msg="NRI interface is disabled by configuration."
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139917381Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.139977071Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140005104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Nov 05 18:06:03 ha-213000-m02 dockerd[494]: time="2024-11-05T18:06:03.140045992Z" level=info msg="containerd successfully booted in 0.025715s"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.121357875Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.185022502Z" level=info msg="Loading containers: start."
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.310121265Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.376080494Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.418336443Z" level=info msg="Loading containers: done."
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425009209Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425044021Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425060317Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.425589655Z" level=info msg="Daemon has completed initialization"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443754722Z" level=info msg="API listen on /var/run/docker.sock"
Nov 05 18:06:04 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:04.443909983Z" level=info msg="API listen on [::]:2376"
Nov 05 18:06:04 ha-213000-m02 systemd[1]: Started Docker Application Container Engine.
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.484920310Z" level=info msg="Processing signal 'terminated'"
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485795881Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485837869Z" level=info msg="Daemon shutdown complete"
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485866356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Nov 05 18:06:05 ha-213000-m02 dockerd[487]: time="2024-11-05T18:06:05.485902025Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Nov 05 18:06:05 ha-213000-m02 systemd[1]: Stopping Docker Application Container Engine...
Nov 05 18:06:06 ha-213000-m02 systemd[1]: docker.service: Deactivated successfully.
Nov 05 18:06:06 ha-213000-m02 systemd[1]: Stopped Docker Application Container Engine.
Nov 05 18:06:06 ha-213000-m02 systemd[1]: Starting Docker Application Container Engine...
Nov 05 18:06:06 ha-213000-m02 dockerd[1168]: time="2024-11-05T18:06:06.522761221Z" level=info msg="Starting up"
Nov 05 18:07:06 ha-213000-m02 dockerd[1168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Nov 05 18:07:06 ha-213000-m02 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
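The journalctl dump above ends with the actual root cause: dockerd could not dial `/run/containerd/containerd.sock` before its deadline, so systemd marked docker.service failed. When triaging many reports like this one, it helps to pull those terminal lines out programmatically. A minimal sketch, assuming the journalctl text is already available as a string (the marker list is an assumption chosen to match the failure lines seen here, not part of any minikube tooling):

```python
# Hypothetical helper: scan a journalctl dump for the lines that explain
# why docker.service did not start. FATAL_MARKERS is an assumed list
# matching the failure patterns visible in this report.
FATAL_MARKERS = ("failed to start daemon", "Failed with result", "Failed to start")

def find_fatal_lines(log_text):
    """Return the log lines matching any known fatal marker, in order."""
    return [line for line in log_text.splitlines()
            if any(marker in line for marker in FATAL_MARKERS)]

# Small excerpt in the same shape as the dump above.
sample = (
    'Nov 05 18:06:06 ha-213000-m02 dockerd[1168]: time="..." msg="Starting up"\n'
    'Nov 05 18:07:06 ha-213000-m02 dockerd[1168]: failed to start daemon: '
    'failed to dial "/run/containerd/containerd.sock": context deadline exceeded\n'
    "Nov 05 18:07:06 ha-213000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.\n"
)
fatal = find_fatal_lines(sample)
```

Applied to the full dump, this surfaces the `failed to dial` line and the two systemd failure lines without reading the roughly 90 lines of plugin-loading noise above them.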
W1105 10:07:06.279763   20256 out.go:270] * 
* 
W1105 10:07:06.285595   20256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1105 10:07:06.306440   20256 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-amd64 -p ha-213000 node start m02 -v=7 --alsologtostderr": exit status 90
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (482.796192ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
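The `minikube status` block above has a regular shape: an unindented node name, followed by indented `key: value` pairs, with a blank line between nodes. A small parser sketch (the function name and dict layout are illustrative assumptions, not minikube API):

```python
def parse_status(text):
    """Parse `minikube status`-style output into {node: {field: value}}."""
    nodes, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None          # blank line ends the current node block
            continue
        if ":" in line and current is not None:
            key, _, val = line.partition(":")
            nodes[current][key.strip()] = val.strip()
        else:
            current = line          # a bare name starts a new node block
            nodes[current] = {}
    return nodes

# Excerpt in the same shape as the status output above.
sample = """\
ha-213000-m02
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Configured
"""
status = parse_status(sample)
```

With the full output, a check like `status["ha-213000-m02"]["kubelet"] == "Stopped"` captures exactly the condition that made the `status` command exit 2 here: the m02 host is Running while its kubelet and apiserver are Stopped.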
** stderr ** 
	I1105 10:07:06.413415   20289 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:06.413756   20289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:06.413762   20289 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:06.413766   20289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:06.413940   20289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:06.414141   20289 out.go:352] Setting JSON to false
	I1105 10:07:06.414165   20289 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:06.414207   20289 notify.go:220] Checking for updates...
	I1105 10:07:06.414492   20289 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:06.414520   20289 status.go:174] checking status of ha-213000 ...
	I1105 10:07:06.415941   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.415993   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.427301   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58423
	I1105 10:07:06.427598   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.428005   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.428015   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.428216   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.428312   20289 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:06.428405   20289 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:06.428483   20289 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:06.429662   20289 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:06.429684   20289 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:06.429927   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.429951   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.444558   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58425
	I1105 10:07:06.444876   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.445210   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.445219   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.445481   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.445589   20289 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:06.445679   20289 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:06.445949   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.445996   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.456917   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58427
	I1105 10:07:06.457223   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.457582   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.457600   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.457807   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.457917   20289 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:06.458096   20289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:06.458114   20289 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:06.458206   20289 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:06.458286   20289 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:06.458370   20289 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:06.458480   20289 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:06.497176   20289 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:06.501917   20289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:06.514471   20289 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:06.514495   20289 api_server.go:166] Checking apiserver status ...
	I1105 10:07:06.514551   20289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:06.527283   20289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:06.535637   20289 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:06.535706   20289 ssh_runner.go:195] Run: ls
	I1105 10:07:06.538869   20289 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:06.541935   20289 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:06.541946   20289 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:06.541954   20289 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:06.541965   20289 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:06.542254   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.542277   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.553562   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58431
	I1105 10:07:06.553892   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.554208   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.554218   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.554424   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.554517   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:06.554617   20289 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:06.554705   20289 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:06.555859   20289 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:06.555868   20289 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:06.556128   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.556153   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.567679   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58433
	I1105 10:07:06.568011   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.568340   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.568349   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.568577   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.568688   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:06.568781   20289 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:06.569074   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.569104   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.580448   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58435
	I1105 10:07:06.580788   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.581164   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.581181   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.581429   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.581537   20289 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:06.581706   20289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:06.581718   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:06.581823   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:06.581921   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:06.582080   20289 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:06.582175   20289 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:06.612181   20289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:06.623587   20289 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:06.623601   20289 api_server.go:166] Checking apiserver status ...
	I1105 10:07:06.623659   20289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:06.633821   20289 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:06.633844   20289 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:06.633853   20289 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:06.633871   20289 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:06.634174   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.634198   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.645311   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58438
	I1105 10:07:06.645699   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.646058   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.646069   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.646306   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.646412   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:06.646518   20289 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:06.646607   20289 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:06.647762   20289 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:06.647771   20289 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:06.648035   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.648063   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.659096   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58440
	I1105 10:07:06.659408   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.659754   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.659771   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.659984   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.660077   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:06.660176   20289 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:06.660453   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.660479   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.671524   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58442
	I1105 10:07:06.671834   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.672171   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.672183   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.672417   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.672524   20289 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:06.672673   20289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:06.672686   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:06.672765   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:06.672847   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:06.672932   20289 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:06.673008   20289 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:06.705848   20289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:06.720961   20289 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:06.720975   20289 api_server.go:166] Checking apiserver status ...
	I1105 10:07:06.721031   20289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:06.731955   20289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:06.739734   20289 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:06.739797   20289 ssh_runner.go:195] Run: ls
	I1105 10:07:06.743465   20289 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:06.746579   20289 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:06.746598   20289 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:06.746602   20289 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:06.746612   20289 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:06.746893   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.746919   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.758014   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58446
	I1105 10:07:06.758346   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.758677   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.758688   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.758886   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.758987   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:06.759081   20289 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:06.759171   20289 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:06.760327   20289 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:06.760336   20289 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:06.760577   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.760599   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.771710   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58448
	I1105 10:07:06.772114   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.772492   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.772508   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.772759   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.772892   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:06.772994   20289 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:06.773270   20289 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:06.773297   20289 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:06.784183   20289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58450
	I1105 10:07:06.784502   20289 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:06.784837   20289 main.go:141] libmachine: Using API Version  1
	I1105 10:07:06.784848   20289 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:06.785084   20289 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:06.785181   20289 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:06.785334   20289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:06.785345   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:06.785429   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:06.785507   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:06.785589   20289 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:06.785676   20289 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:06.815533   20289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:06.826572   20289 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:06.831210   17842 retry.go:31] will retry after 1.406092358s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (466.927414ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:08.304505   20305 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:08.304728   20305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:08.304734   20305 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:08.304738   20305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:08.304918   20305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:08.305110   20305 out.go:352] Setting JSON to false
	I1105 10:07:08.305133   20305 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:08.305166   20305 notify.go:220] Checking for updates...
	I1105 10:07:08.305493   20305 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:08.305515   20305 status.go:174] checking status of ha-213000 ...
	I1105 10:07:08.305964   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.306000   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.317344   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58454
	I1105 10:07:08.317638   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.318028   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.318036   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.318308   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.318415   20305 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:08.318505   20305 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:08.318568   20305 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:08.319702   20305 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:08.319719   20305 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:08.320001   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.320025   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.330955   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58456
	I1105 10:07:08.331274   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.331596   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.331614   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.331825   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.331924   20305 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:08.332014   20305 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:08.332261   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.332288   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.343201   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58458
	I1105 10:07:08.343534   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.343845   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.343859   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.344087   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.344183   20305 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:08.344356   20305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:08.344388   20305 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:08.344469   20305 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:08.344549   20305 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:08.344632   20305 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:08.344727   20305 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:08.381711   20305 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:08.386085   20305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:08.398705   20305 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:08.398730   20305 api_server.go:166] Checking apiserver status ...
	I1105 10:07:08.398787   20305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:08.412301   20305 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:08.420509   20305 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:08.420571   20305 ssh_runner.go:195] Run: ls
	I1105 10:07:08.423667   20305 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:08.427877   20305 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:08.427891   20305 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:08.427897   20305 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:08.427907   20305 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:08.428211   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.428234   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.439337   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58462
	I1105 10:07:08.439648   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.439993   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.440007   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.440206   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.440305   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:08.440395   20305 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:08.440464   20305 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:08.441632   20305 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:08.441640   20305 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:08.441893   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.441915   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.452956   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58464
	I1105 10:07:08.453281   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.453646   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.453661   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.453877   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.453969   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:08.454055   20305 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:08.454326   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.454349   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.465287   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58466
	I1105 10:07:08.465606   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.465962   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.465984   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.466228   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.466338   20305 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:08.466501   20305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:08.466513   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:08.466604   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:08.466722   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:08.466836   20305 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:08.466920   20305 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:08.496093   20305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:08.506575   20305 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:08.506589   20305 api_server.go:166] Checking apiserver status ...
	I1105 10:07:08.506643   20305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:08.516828   20305 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:08.516839   20305 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:08.516848   20305 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:08.516858   20305 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:08.517144   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.517168   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.528332   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58469
	I1105 10:07:08.528654   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.528994   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.529008   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.529212   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.529303   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:08.529390   20305 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:08.529454   20305 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:08.530627   20305 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:08.530637   20305 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:08.530905   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.530931   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.541882   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58471
	I1105 10:07:08.542220   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.542556   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.542568   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.542795   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.542900   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:08.543004   20305 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:08.543267   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.543290   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.554266   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58473
	I1105 10:07:08.554574   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.554941   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.554957   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.555175   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.555269   20305 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:08.555408   20305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:08.555427   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:08.555499   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:08.555606   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:08.555679   20305 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:08.555760   20305 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:08.585337   20305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:08.596766   20305 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:08.596780   20305 api_server.go:166] Checking apiserver status ...
	I1105 10:07:08.596835   20305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:08.607800   20305 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:08.615049   20305 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:08.615113   20305 ssh_runner.go:195] Run: ls
	I1105 10:07:08.618497   20305 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:08.621661   20305 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:08.621673   20305 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:08.621678   20305 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:08.621688   20305 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:08.621946   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.621966   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.633427   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58477
	I1105 10:07:08.633767   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.634139   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.634154   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.634415   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.634519   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:08.634622   20305 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:08.634701   20305 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:08.635941   20305 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:08.635952   20305 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:08.636247   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.636407   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.647534   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58479
	I1105 10:07:08.647879   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.648240   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.648252   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.648462   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.648563   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:08.648669   20305 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:08.648934   20305 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:08.648955   20305 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:08.660364   20305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58481
	I1105 10:07:08.660681   20305 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:08.661045   20305 main.go:141] libmachine: Using API Version  1
	I1105 10:07:08.661058   20305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:08.661261   20305 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:08.661361   20305 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:08.661519   20305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:08.661530   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:08.661612   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:08.661698   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:08.661795   20305 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:08.661872   20305 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:08.691696   20305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:08.702915   20305 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:08.706663   17842 retry.go:31] will retry after 901.512915ms: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (469.899255ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 10:07:09.675005   20319 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:09.675314   20319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:09.675320   20319 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:09.675324   20319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:09.675512   20319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:09.675693   20319 out.go:352] Setting JSON to false
	I1105 10:07:09.675715   20319 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:09.675763   20319 notify.go:220] Checking for updates...
	I1105 10:07:09.676098   20319 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:09.676121   20319 status.go:174] checking status of ha-213000 ...
	I1105 10:07:09.676599   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.676644   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.688107   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58485
	I1105 10:07:09.688448   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.688843   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.688853   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.689100   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.689209   20319 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:09.689312   20319 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:09.689379   20319 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:09.690558   20319 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:09.690575   20319 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:09.690833   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.690856   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.704432   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58487
	I1105 10:07:09.704765   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.705102   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.705115   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.705330   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.705433   20319 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:09.705544   20319 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:09.705816   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.705838   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.717074   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58489
	I1105 10:07:09.717358   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.717694   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.717708   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.717915   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.718023   20319 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:09.718187   20319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:09.718208   20319 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:09.718290   20319 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:09.718368   20319 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:09.718453   20319 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:09.718533   20319 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:09.752641   20319 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:09.757273   20319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:09.768186   20319 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:09.768210   20319 api_server.go:166] Checking apiserver status ...
	I1105 10:07:09.768260   20319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:09.780378   20319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:09.787870   20319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:09.787951   20319 ssh_runner.go:195] Run: ls
	I1105 10:07:09.791591   20319 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:09.795683   20319 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:09.795695   20319 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:09.795702   20319 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:09.795720   20319 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:09.795980   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.796002   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.807202   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58493
	I1105 10:07:09.807534   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.807931   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.807947   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.808139   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.808236   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:09.808329   20319 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:09.808392   20319 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:09.809583   20319 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:09.809593   20319 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:09.809862   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.809887   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.820933   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58495
	I1105 10:07:09.821365   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.821726   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.821741   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.821943   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.822048   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:09.822139   20319 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:09.822398   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.822420   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.833412   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58497
	I1105 10:07:09.833833   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.834181   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.834200   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.834459   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.834586   20319 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:09.834744   20319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:09.834761   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:09.834861   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:09.834954   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:09.835044   20319 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:09.835123   20319 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:09.864781   20319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:09.875608   20319 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:09.875622   20319 api_server.go:166] Checking apiserver status ...
	I1105 10:07:09.875675   20319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:09.885737   20319 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:09.885749   20319 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:09.885755   20319 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:09.885774   20319 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:09.886062   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.886085   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.897251   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58500
	I1105 10:07:09.897569   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.897912   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.897926   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.898147   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.898230   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:09.898331   20319 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:09.898397   20319 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:09.899607   20319 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:09.899616   20319 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:09.899889   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.899919   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.910928   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58502
	I1105 10:07:09.911244   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.911563   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.911573   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.911783   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.911891   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:09.911997   20319 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:09.912277   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.912299   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:09.923240   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58504
	I1105 10:07:09.923560   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:09.923876   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:09.923887   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:09.924113   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:09.924216   20319 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:09.924370   20319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:09.924387   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:09.924470   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:09.924546   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:09.924637   20319 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:09.924719   20319 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:09.955666   20319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:09.967368   20319 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:09.967383   20319 api_server.go:166] Checking apiserver status ...
	I1105 10:07:09.967431   20319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:09.979345   20319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:09.990089   20319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:09.990162   20319 ssh_runner.go:195] Run: ls
	I1105 10:07:09.993275   20319 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:09.996345   20319 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:09.996357   20319 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:09.996362   20319 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:09.996371   20319 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:09.996624   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:09.996647   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:10.007724   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58508
	I1105 10:07:10.008032   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:10.008382   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:10.008398   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:10.008632   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:10.008745   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:10.008840   20319 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:10.008920   20319 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:10.010130   20319 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:10.010139   20319 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:10.010420   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:10.010446   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:10.021445   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58510
	I1105 10:07:10.021763   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:10.022151   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:10.022169   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:10.022394   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:10.022498   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:10.022594   20319 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:10.022852   20319 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:10.022877   20319 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:10.033873   20319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58512
	I1105 10:07:10.034197   20319 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:10.034521   20319 main.go:141] libmachine: Using API Version  1
	I1105 10:07:10.034539   20319 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:10.034751   20319 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:10.034871   20319 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:10.035022   20319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:10.035037   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:10.035125   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:10.035205   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:10.035285   20319 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:10.035365   20319 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:10.065117   20319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:10.076433   20319 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:10.080025   17842 retry.go:31] will retry after 2.411002638s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (467.806684ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:12.558706   20333 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:12.559011   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:12.559017   20333 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:12.559020   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:12.559202   20333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:12.559420   20333 out.go:352] Setting JSON to false
	I1105 10:07:12.559443   20333 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:12.559479   20333 notify.go:220] Checking for updates...
	I1105 10:07:12.559810   20333 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:12.559831   20333 status.go:174] checking status of ha-213000 ...
	I1105 10:07:12.560266   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.560326   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.571622   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58516
	I1105 10:07:12.571938   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.572372   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.572382   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.572583   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.572692   20333 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:12.572783   20333 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:12.572860   20333 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:12.574074   20333 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:12.574092   20333 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:12.574342   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.574365   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.588836   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58518
	I1105 10:07:12.589162   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.589468   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.589476   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.589680   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.589769   20333 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:12.589866   20333 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:12.590140   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.590170   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.601178   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58520
	I1105 10:07:12.601471   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.601859   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.601883   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.602106   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.602209   20333 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:12.602386   20333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:12.602407   20333 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:12.602484   20333 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:12.602574   20333 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:12.602665   20333 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:12.602747   20333 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:12.636973   20333 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:12.641302   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:12.653680   20333 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:12.653705   20333 api_server.go:166] Checking apiserver status ...
	I1105 10:07:12.653760   20333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:12.665451   20333 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:12.672708   20333 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:12.672775   20333 ssh_runner.go:195] Run: ls
	I1105 10:07:12.676426   20333 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:12.679570   20333 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:12.679582   20333 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:12.679601   20333 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:12.679616   20333 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:12.679881   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.679904   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.690984   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58524
	I1105 10:07:12.691304   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.691630   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.691642   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.691832   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.691929   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:12.692019   20333 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:12.692084   20333 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:12.693325   20333 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:12.693335   20333 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:12.693588   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.693609   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.704626   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58526
	I1105 10:07:12.705018   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.705392   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.705403   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.705618   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.705711   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:12.705792   20333 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:12.706054   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.706076   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.717404   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58528
	I1105 10:07:12.717726   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.718082   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.718101   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.718333   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.718445   20333 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:12.718615   20333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:12.718629   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:12.718711   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:12.718793   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:12.718885   20333 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:12.718969   20333 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:12.748578   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:12.759778   20333 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:12.759793   20333 api_server.go:166] Checking apiserver status ...
	I1105 10:07:12.759847   20333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:12.769893   20333 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:12.769905   20333 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:12.769910   20333 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:12.769924   20333 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:12.770204   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.770226   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.781511   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58531
	I1105 10:07:12.781843   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.782158   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.782169   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.782401   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.782510   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:12.782601   20333 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:12.782670   20333 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:12.783888   20333 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:12.783898   20333 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:12.784164   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.784191   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.795236   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58533
	I1105 10:07:12.795571   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.795923   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.795959   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.796168   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.796262   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:12.796374   20333 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:12.796657   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.796684   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.807961   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58535
	I1105 10:07:12.808309   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.808650   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.808660   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.808897   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.809006   20333 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:12.809181   20333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:12.809198   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:12.809278   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:12.809357   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:12.809434   20333 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:12.809519   20333 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:12.839060   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:12.849782   20333 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:12.849798   20333 api_server.go:166] Checking apiserver status ...
	I1105 10:07:12.849848   20333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:12.860653   20333 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:12.867943   20333 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:12.868017   20333 ssh_runner.go:195] Run: ls
	I1105 10:07:12.871302   20333 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:12.874443   20333 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:12.874453   20333 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:12.874458   20333 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:12.874466   20333 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:12.874722   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.874742   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.885792   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58539
	I1105 10:07:12.886102   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.886438   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.886449   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.886665   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.886760   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:12.886856   20333 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:12.886926   20333 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:12.888312   20333 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:12.888322   20333 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:12.888592   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.888617   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.899953   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58541
	I1105 10:07:12.900261   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.900587   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.900599   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.900815   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.900914   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:12.900997   20333 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:12.901264   20333 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:12.901290   20333 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:12.912508   20333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58543
	I1105 10:07:12.912835   20333 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:12.913177   20333 main.go:141] libmachine: Using API Version  1
	I1105 10:07:12.913191   20333 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:12.913404   20333 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:12.913519   20333 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:12.913670   20333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:12.913681   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:12.913773   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:12.913853   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:12.913963   20333 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:12.914039   20333 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:12.945470   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:12.956810   20333 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:12.960480   17842 retry.go:31] will retry after 2.839431531s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (464.517598ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:15.868756   20347 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:15.868989   20347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:15.868997   20347 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:15.869001   20347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:15.869189   20347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:15.869382   20347 out.go:352] Setting JSON to false
	I1105 10:07:15.869405   20347 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:15.869442   20347 notify.go:220] Checking for updates...
	I1105 10:07:15.869764   20347 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:15.869788   20347 status.go:174] checking status of ha-213000 ...
	I1105 10:07:15.870232   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:15.870289   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:15.881832   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58547
	I1105 10:07:15.882169   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:15.882603   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:15.882614   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:15.882867   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:15.882986   20347 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:15.883086   20347 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:15.883147   20347 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:15.884341   20347 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:15.884358   20347 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:15.884619   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:15.884640   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:15.898069   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58549
	I1105 10:07:15.898432   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:15.898757   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:15.898768   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:15.898972   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:15.899070   20347 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:15.899153   20347 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:15.899427   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:15.899448   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:15.910629   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58551
	I1105 10:07:15.910931   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:15.911256   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:15.911273   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:15.911502   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:15.911603   20347 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:15.911789   20347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:15.911812   20347 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:15.911932   20347 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:15.912025   20347 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:15.912113   20347 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:15.912194   20347 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:15.945838   20347 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:15.950274   20347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:15.961914   20347 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:15.961938   20347 api_server.go:166] Checking apiserver status ...
	I1105 10:07:15.961993   20347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:15.974595   20347 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:15.982770   20347 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:15.982832   20347 ssh_runner.go:195] Run: ls
	I1105 10:07:15.985880   20347 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:15.989346   20347 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:15.989358   20347 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:15.989365   20347 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:15.989376   20347 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:15.989641   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:15.989665   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.001362   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58555
	I1105 10:07:16.001768   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.002093   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.002105   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.002336   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.002436   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:16.002524   20347 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:16.002612   20347 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:16.003852   20347 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:16.003862   20347 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:16.004147   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.004173   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.015195   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58557
	I1105 10:07:16.015506   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.015844   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.015861   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.016090   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.016182   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:16.016278   20347 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:16.016535   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.016559   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.027606   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58559
	I1105 10:07:16.027917   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.028267   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.028282   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.028466   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.028576   20347 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:16.028739   20347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:16.028759   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:16.028839   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:16.028933   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:16.029012   20347 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:16.029101   20347 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:16.058512   20347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:16.068999   20347 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:16.069012   20347 api_server.go:166] Checking apiserver status ...
	I1105 10:07:16.069067   20347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:16.079073   20347 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:16.079086   20347 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:16.079091   20347 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:16.079101   20347 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:16.079402   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.079424   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.090798   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58562
	I1105 10:07:16.091128   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.091454   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.091466   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.091697   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.091813   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:16.091908   20347 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:16.092039   20347 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:16.093294   20347 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:16.093302   20347 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:16.093564   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.093608   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.104759   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58564
	I1105 10:07:16.105074   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.105427   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.105443   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.105650   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.105747   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:16.105846   20347 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:16.106118   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.106141   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.117171   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58566
	I1105 10:07:16.117512   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.117821   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.117832   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.118056   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.118164   20347 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:16.118330   20347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:16.118341   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:16.118413   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:16.118494   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:16.118584   20347 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:16.118665   20347 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:16.147312   20347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:16.157659   20347 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:16.157678   20347 api_server.go:166] Checking apiserver status ...
	I1105 10:07:16.157731   20347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:16.168642   20347 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:16.176111   20347 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:16.176177   20347 ssh_runner.go:195] Run: ls
	I1105 10:07:16.179653   20347 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:16.182800   20347 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:16.182812   20347 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:16.182818   20347 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:16.182827   20347 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:16.183090   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.183108   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.194350   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58570
	I1105 10:07:16.194670   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.194985   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.194998   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.195207   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.195303   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:16.195410   20347 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:16.195475   20347 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:16.196752   20347 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:16.196762   20347 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:16.197019   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.197052   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.208212   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58572
	I1105 10:07:16.208595   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.208936   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.208948   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.209161   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.209242   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:16.209329   20347 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:16.209594   20347 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:16.209614   20347 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:16.220923   20347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58574
	I1105 10:07:16.221287   20347 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:16.221647   20347 main.go:141] libmachine: Using API Version  1
	I1105 10:07:16.221661   20347 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:16.221869   20347 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:16.221995   20347 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:16.222212   20347 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:16.222234   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:16.222327   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:16.222422   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:16.222505   20347 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:16.222585   20347 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:16.252224   20347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:16.263812   20347 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
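The stderr trace above shows the probes `minikube status` runs over SSH against each node: a disk-usage check (`df -h /var | awk 'NR==2{print $5}'`), a kubelet liveness check (`systemctl is-active --quiet service kubelet`), and, for control-plane nodes, a `pgrep` for kube-apiserver followed by an HTTPS GET of `/healthz`. A minimal local sketch of the disk probe, pointed at `/` instead of the VM's `/var` so it runs anywhere (the curl line is an illustration only, using the VIP endpoint seen in the log):

```shell
# Same awk pipeline the log shows minikube running over SSH, applied to /
# so it can run locally; prints the Use%/Capacity column of the data row.
usage=$(df -h / | awk 'NR==2{print $5}')
echo "root filesystem usage: ${usage}"

# For control-plane nodes the harness then probes the apiserver; with curl
# the equivalent of the healthz check in the log would look like:
#   curl -ksf https://192.169.0.254:8443/healthz   # body "ok" when healthy
```

On both macOS and Linux `df -h`, the fifth column of the second row is the usage percentage for the filesystem mounted at `/`, matching what the log's pipeline extracts for `/var` inside the VM.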
I1105 10:07:16.267489   17842 retry.go:31] will retry after 2.774583184s: exit status 2
E1105 10:07:17.858012   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (463.600775ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:19.109129   20366 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:19.109994   20366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:19.110003   20366 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:19.110009   20366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:19.110579   20366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:19.110783   20366 out.go:352] Setting JSON to false
	I1105 10:07:19.110806   20366 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:19.110861   20366 notify.go:220] Checking for updates...
	I1105 10:07:19.111173   20366 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:19.111197   20366 status.go:174] checking status of ha-213000 ...
	I1105 10:07:19.111631   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.111672   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.123341   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58578
	I1105 10:07:19.123652   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.124059   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.124069   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.124304   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.124396   20366 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:19.124490   20366 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:19.124560   20366 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:19.125757   20366 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:19.125774   20366 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:19.126034   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.126059   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.136924   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58580
	I1105 10:07:19.137239   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.137544   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.137553   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.137814   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.137922   20366 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:19.138021   20366 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:19.138310   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.138338   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.149239   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58582
	I1105 10:07:19.149551   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.149872   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.149883   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.150068   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.150159   20366 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:19.150342   20366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:19.150360   20366 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:19.150444   20366 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:19.150521   20366 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:19.150601   20366 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:19.150688   20366 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:19.186735   20366 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:19.191574   20366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:19.202396   20366 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:19.202420   20366 api_server.go:166] Checking apiserver status ...
	I1105 10:07:19.202471   20366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:19.214013   20366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:19.221724   20366 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:19.221807   20366 ssh_runner.go:195] Run: ls
	I1105 10:07:19.225047   20366 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:19.228254   20366 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:19.228267   20366 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:19.228272   20366 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:19.228282   20366 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:19.228543   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.228565   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.240053   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58586
	I1105 10:07:19.240444   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.240912   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.240932   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.241184   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.241288   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:19.241375   20366 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:19.241461   20366 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:19.242705   20366 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:19.242715   20366 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:19.242979   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.243012   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.253963   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58588
	I1105 10:07:19.254289   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.254586   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.254596   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.254789   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.254884   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:19.254994   20366 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:19.255265   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.255293   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.266363   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58590
	I1105 10:07:19.266672   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.267051   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.267068   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.267296   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.267404   20366 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:19.267558   20366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:19.267571   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:19.267656   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:19.267733   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:19.267807   20366 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:19.267891   20366 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:19.297119   20366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:19.308106   20366 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:19.308120   20366 api_server.go:166] Checking apiserver status ...
	I1105 10:07:19.308173   20366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:19.318132   20366 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:19.318144   20366 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:19.318149   20366 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:19.318158   20366 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:19.318436   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.318458   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.329861   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58593
	I1105 10:07:19.330182   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.330525   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.330539   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.330748   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.330841   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:19.330924   20366 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:19.330987   20366 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:19.332208   20366 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:19.332217   20366 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:19.332488   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.332519   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.343616   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58595
	I1105 10:07:19.344020   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.344373   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.344383   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.344594   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.344695   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:19.344782   20366 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:19.345041   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.345065   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.356218   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58597
	I1105 10:07:19.356568   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.356889   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.356900   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.357130   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.357231   20366 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:19.357392   20366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:19.357404   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:19.357480   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:19.357561   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:19.357647   20366 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:19.357728   20366 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:19.386979   20366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:19.397843   20366 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:19.397859   20366 api_server.go:166] Checking apiserver status ...
	I1105 10:07:19.397923   20366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:19.409036   20366 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:19.416181   20366 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:19.416239   20366 ssh_runner.go:195] Run: ls
	I1105 10:07:19.419722   20366 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:19.422915   20366 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:19.422927   20366 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:19.422932   20366 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:19.422941   20366 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:19.423229   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.423251   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.434466   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58601
	I1105 10:07:19.434787   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.435150   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.435164   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.435389   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.435475   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:19.435554   20366 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:19.435627   20366 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:19.436860   20366 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:19.436869   20366 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:19.437125   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.437166   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.448220   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58603
	I1105 10:07:19.448545   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.448887   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.448897   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.449138   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.449247   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:19.449353   20366 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:19.449635   20366 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:19.449661   20366 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:19.460658   20366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58605
	I1105 10:07:19.461041   20366 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:19.461416   20366 main.go:141] libmachine: Using API Version  1
	I1105 10:07:19.461432   20366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:19.461663   20366 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:19.461777   20366 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:19.461953   20366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:19.461964   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:19.462054   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:19.462148   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:19.462241   20366 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:19.462370   20366 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:19.492086   20366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:19.503511   20366 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
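Between failed `status` calls the test harness backs off and retries (`retry.go:31] will retry after 8.941814328s: exit status 2`). A rough sketch of that retry-with-backoff pattern, with a hypothetical `status_check` standing in for the failing `minikube status` invocation and shortened delays:

```shell
# Hedged sketch of a retry-with-backoff loop like the one retry.go logs;
# status_check is a stand-in for the failing minikube status call.
status_check() { false; }   # hypothetical: always fails, like exit status 2

attempt=0
delay=1
until status_check || [ "$attempt" -ge 2 ]; do
  attempt=$((attempt + 1))
  echo "attempt $attempt failed; will retry after ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))      # grow the wait between retries
done
echo "stopped after $attempt attempts"
```

The loop exits either when the check finally succeeds or when the attempt budget is exhausted, which mirrors the harness behaviour above: repeated `Non-zero exit ... exit status 2` lines interleaved with increasing retry delays.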
I1105 10:07:19.507016   17842 retry.go:31] will retry after 8.941814328s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (473.170151ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:28.517624   20381 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:28.517945   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:28.517951   20381 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:28.517954   20381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:28.518132   20381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:28.518314   20381 out.go:352] Setting JSON to false
	I1105 10:07:28.518337   20381 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:28.518371   20381 notify.go:220] Checking for updates...
	I1105 10:07:28.518696   20381 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:28.518717   20381 status.go:174] checking status of ha-213000 ...
	I1105 10:07:28.519125   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.519168   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.530610   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58609
	I1105 10:07:28.531006   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.531418   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.531429   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.531639   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.531733   20381 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:28.531812   20381 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:28.531879   20381 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:28.533027   20381 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:28.533043   20381 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:28.533280   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.533307   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.544143   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58611
	I1105 10:07:28.544474   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.544802   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.544812   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.545073   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.545186   20381 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:28.545279   20381 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:28.545554   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.545582   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.556429   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58613
	I1105 10:07:28.556707   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.557069   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.557082   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.557306   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.557405   20381 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:28.557574   20381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:28.557594   20381 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:28.557674   20381 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:28.557760   20381 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:28.557849   20381 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:28.557926   20381 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:28.591992   20381 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:28.596791   20381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:28.608184   20381 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:28.608211   20381 api_server.go:166] Checking apiserver status ...
	I1105 10:07:28.608268   20381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:28.621143   20381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:28.629549   20381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:28.629623   20381 ssh_runner.go:195] Run: ls
	I1105 10:07:28.632951   20381 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:28.639771   20381 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:28.639784   20381 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:28.639790   20381 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:28.639802   20381 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:28.640102   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.640125   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.651282   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58617
	I1105 10:07:28.651611   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.651953   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.651968   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.652191   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.652287   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:28.652383   20381 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:28.652457   20381 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:28.653648   20381 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:28.653657   20381 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:28.653929   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.653954   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.664857   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58619
	I1105 10:07:28.665185   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.665567   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.665594   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.665823   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.665935   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:28.666032   20381 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:28.666306   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.666338   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.677452   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58621
	I1105 10:07:28.677797   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.678173   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.678187   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.678424   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.678534   20381 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:28.678704   20381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:28.678716   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:28.678828   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:28.678917   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:28.679012   20381 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:28.679098   20381 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:28.708301   20381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:28.719260   20381 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:28.719274   20381 api_server.go:166] Checking apiserver status ...
	I1105 10:07:28.719327   20381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:28.729234   20381 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:28.729244   20381 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:28.729250   20381 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:28.729259   20381 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:28.729534   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.729556   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.740611   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58624
	I1105 10:07:28.740938   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.741270   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.741285   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.741492   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.741593   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:28.741693   20381 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:28.741757   20381 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:28.742929   20381 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:28.742938   20381 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:28.743229   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.743254   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.754313   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58626
	I1105 10:07:28.754736   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.755129   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.755144   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.755354   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.755460   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:28.755548   20381 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:28.755812   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.755834   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.766869   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58628
	I1105 10:07:28.767214   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.767593   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.767608   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.767870   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.767989   20381 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:28.768169   20381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:28.768183   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:28.768284   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:28.768389   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:28.768494   20381 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:28.768597   20381 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:28.798123   20381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:28.809255   20381 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:28.809269   20381 api_server.go:166] Checking apiserver status ...
	I1105 10:07:28.809327   20381 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:28.820498   20381 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:28.827665   20381 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:28.827735   20381 ssh_runner.go:195] Run: ls
	I1105 10:07:28.831031   20381 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:28.834338   20381 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:28.834351   20381 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:28.834357   20381 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:28.834366   20381 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:28.834623   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.834658   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.845817   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58632
	I1105 10:07:28.846155   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.846516   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.846533   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.846742   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.846844   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:28.846944   20381 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:28.847008   20381 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:28.848185   20381 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:28.848194   20381 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:28.848469   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.848492   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.859441   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58634
	I1105 10:07:28.859776   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.860103   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.860112   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.860334   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.860438   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:28.860537   20381 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:28.860801   20381 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:28.860824   20381 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:28.872088   20381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58636
	I1105 10:07:28.872435   20381 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:28.872757   20381 main.go:141] libmachine: Using API Version  1
	I1105 10:07:28.872766   20381 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:28.872993   20381 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:28.873102   20381 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:28.873252   20381 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:28.873264   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:28.873337   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:28.873428   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:28.873509   20381 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:28.873591   20381 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:28.902988   20381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:28.919618   20381 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:28.923439   17842 retry.go:31] will retry after 14.501028145s: exit status 2
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (468.919591ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:43.493165   20400 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:43.493395   20400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:43.493400   20400 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:43.493404   20400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:43.493569   20400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:43.493752   20400 out.go:352] Setting JSON to false
	I1105 10:07:43.493774   20400 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:43.493815   20400 notify.go:220] Checking for updates...
	I1105 10:07:43.494130   20400 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:43.494152   20400 status.go:174] checking status of ha-213000 ...
	I1105 10:07:43.494581   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.494632   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.506542   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58640
	I1105 10:07:43.506875   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.507277   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.507306   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.507516   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.507606   20400 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:43.507701   20400 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:43.507757   20400 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:43.508896   20400 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:43.508917   20400 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:43.509158   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.509178   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.522665   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58642
	I1105 10:07:43.522981   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.523290   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.523307   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.523503   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.523602   20400 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:43.523699   20400 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:43.523962   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.523982   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.534893   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58644
	I1105 10:07:43.535209   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.535537   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.535546   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.535774   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.535875   20400 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:43.536045   20400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:43.536065   20400 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:43.536148   20400 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:43.536220   20400 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:43.536341   20400 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:43.536433   20400 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:43.570653   20400 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:43.574928   20400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:43.587008   20400 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:43.587034   20400 api_server.go:166] Checking apiserver status ...
	I1105 10:07:43.587089   20400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:43.599307   20400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:43.607316   20400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:43.607373   20400 ssh_runner.go:195] Run: ls
	I1105 10:07:43.610507   20400 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:43.614681   20400 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:43.614695   20400 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:43.614704   20400 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:43.614719   20400 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:43.615007   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.615028   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.626060   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58648
	I1105 10:07:43.626440   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.626771   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.626779   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.626987   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.627075   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:43.627188   20400 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:43.627239   20400 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:43.628359   20400 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:43.628367   20400 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:43.628641   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.628663   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.639528   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58650
	I1105 10:07:43.639861   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.640242   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.640257   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.640512   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.640631   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:43.640747   20400 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:43.641032   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.641065   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.651878   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58652
	I1105 10:07:43.652175   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.652482   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.652492   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.652740   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.652832   20400 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:43.652982   20400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:43.653001   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:43.653085   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:43.653181   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:43.653260   20400 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:43.653343   20400 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:43.683423   20400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:43.694135   20400 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:43.694148   20400 api_server.go:166] Checking apiserver status ...
	I1105 10:07:43.694198   20400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:43.704275   20400 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:43.704286   20400 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:43.704291   20400 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:43.704300   20400 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:43.704587   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.704612   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.715769   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58655
	I1105 10:07:43.716064   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.716415   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.716432   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.716665   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.716770   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:43.716870   20400 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:43.716937   20400 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:43.718077   20400 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:43.718084   20400 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:43.718354   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.718380   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.729441   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58657
	I1105 10:07:43.729743   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.730090   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.730101   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.730319   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.730416   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:43.730518   20400 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:43.730776   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.730798   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.741887   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58659
	I1105 10:07:43.742224   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.742576   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.742593   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.742830   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.742947   20400 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:43.743090   20400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:43.743104   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:43.743181   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:43.743262   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:43.743342   20400 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:43.743419   20400 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:43.772801   20400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:43.785020   20400 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:43.785039   20400 api_server.go:166] Checking apiserver status ...
	I1105 10:07:43.785092   20400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:43.797143   20400 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:43.805358   20400 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:43.805428   20400 ssh_runner.go:195] Run: ls
	I1105 10:07:43.808487   20400 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:43.811471   20400 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:43.811483   20400 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:43.811488   20400 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:43.811497   20400 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:43.811766   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.811801   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.822874   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58663
	I1105 10:07:43.823188   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.823537   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.823552   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.823774   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.823871   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:43.823966   20400 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:43.824034   20400 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:43.825217   20400 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:43.825225   20400 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:43.825484   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.825510   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.836581   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58665
	I1105 10:07:43.836908   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.837270   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.837288   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.837506   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.837608   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:43.837708   20400 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:43.837983   20400 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:43.838012   20400 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:43.848845   20400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58667
	I1105 10:07:43.849152   20400 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:43.849507   20400 main.go:141] libmachine: Using API Version  1
	I1105 10:07:43.849530   20400 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:43.849735   20400 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:43.849846   20400 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:43.850005   20400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:43.850017   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:43.850114   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:43.850198   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:43.850279   20400 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:43.850368   20400 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:43.880699   20400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:43.892000   20400 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1105 10:07:43.896247   17842 retry.go:31] will retry after 11.510685185s: exit status 2
E1105 10:07:54.201886   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (465.041873ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:07:55.475431   20417 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:07:55.475692   20417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:55.475700   20417 out.go:358] Setting ErrFile to fd 2...
	I1105 10:07:55.475705   20417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:07:55.475903   20417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:07:55.476149   20417 out.go:352] Setting JSON to false
	I1105 10:07:55.476175   20417 mustload.go:65] Loading cluster: ha-213000
	I1105 10:07:55.476227   20417 notify.go:220] Checking for updates...
	I1105 10:07:55.476611   20417 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:07:55.476635   20417 status.go:174] checking status of ha-213000 ...
	I1105 10:07:55.477106   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.477141   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.488769   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58671
	I1105 10:07:55.489086   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.489509   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.489536   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.489750   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.489844   20417 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:07:55.489933   20417 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:55.489998   20417 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:07:55.491131   20417 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:07:55.491148   20417 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:55.491437   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.491461   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.506209   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58673
	I1105 10:07:55.506552   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.506872   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.506885   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.507094   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.507183   20417 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:07:55.507278   20417 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:07:55.507531   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.507558   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.518498   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58675
	I1105 10:07:55.518780   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.519167   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.519189   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.519398   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.519496   20417 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:07:55.519651   20417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:55.519676   20417 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:07:55.519753   20417 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:07:55.519854   20417 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:07:55.519940   20417 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:07:55.520025   20417 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:07:55.554836   20417 ssh_runner.go:195] Run: systemctl --version
	I1105 10:07:55.559391   20417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:55.570615   20417 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:55.570638   20417 api_server.go:166] Checking apiserver status ...
	I1105 10:07:55.570701   20417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:55.582319   20417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:07:55.589521   20417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:55.589578   20417 ssh_runner.go:195] Run: ls
	I1105 10:07:55.592759   20417 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:55.597225   20417 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:55.597238   20417 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:07:55.597246   20417 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:55.597258   20417 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:07:55.597533   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.597554   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.608701   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58679
	I1105 10:07:55.609026   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.609339   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.609349   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.609578   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.609677   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:07:55.609760   20417 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:55.609836   20417 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20260
	I1105 10:07:55.610971   20417 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:07:55.610979   20417 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:55.611245   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.611268   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.622261   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58681
	I1105 10:07:55.622568   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.622881   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.622892   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.623124   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.623225   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:07:55.623323   20417 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:07:55.623579   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.623604   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.634648   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58683
	I1105 10:07:55.634972   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.635333   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.635347   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.635578   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.635675   20417 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:07:55.635857   20417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:55.635871   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:07:55.635954   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:07:55.636065   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:07:55.636141   20417 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:07:55.636228   20417 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:07:55.665264   20417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:55.676281   20417 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:55.676294   20417 api_server.go:166] Checking apiserver status ...
	I1105 10:07:55.676351   20417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1105 10:07:55.686100   20417 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:55.686110   20417 status.go:463] ha-213000-m02 apiserver status = Stopped (err=<nil>)
	I1105 10:07:55.686115   20417 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:55.686123   20417 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:07:55.686418   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.686440   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.697609   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58686
	I1105 10:07:55.697923   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.698302   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.698325   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.698554   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.698660   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:07:55.698767   20417 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:55.698841   20417 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:07:55.699998   20417 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:07:55.700007   20417 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:55.700270   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.700304   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.711364   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58688
	I1105 10:07:55.711690   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.712028   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.712046   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.712268   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.712366   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:07:55.712445   20417 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:07:55.712706   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.712729   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.723627   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58690
	I1105 10:07:55.723933   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.724259   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.724283   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.724485   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.724581   20417 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:07:55.724727   20417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:55.724738   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:07:55.724819   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:07:55.724896   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:07:55.724973   20417 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:07:55.725080   20417 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:07:55.754667   20417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:55.765535   20417 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:07:55.765549   20417 api_server.go:166] Checking apiserver status ...
	I1105 10:07:55.765601   20417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:07:55.776992   20417 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:07:55.784412   20417 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:07:55.784478   20417 ssh_runner.go:195] Run: ls
	I1105 10:07:55.787776   20417 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:07:55.790863   20417 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:07:55.790876   20417 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:07:55.790881   20417 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:07:55.790890   20417 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:07:55.791166   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.791189   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.802294   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58694
	I1105 10:07:55.802600   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.802965   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.802980   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.803191   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.803285   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:07:55.803376   20417 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:07:55.803436   20417 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:07:55.804572   20417 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:07:55.804580   20417 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:55.804837   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.804860   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.815965   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58696
	I1105 10:07:55.816308   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.816643   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.816658   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.816896   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.816999   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:07:55.817105   20417 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:07:55.817365   20417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:07:55.817396   20417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:07:55.828324   20417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58698
	I1105 10:07:55.828640   20417 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:07:55.828962   20417 main.go:141] libmachine: Using API Version  1
	I1105 10:07:55.828972   20417 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:07:55.829166   20417 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:07:55.829260   20417 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:07:55.829416   20417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:07:55.829427   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:07:55.829517   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:07:55.829592   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:07:55.829682   20417 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:07:55.829761   20417 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:07:55.859231   20417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:07:55.870160   20417 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (2.51881883s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m03_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:00:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:00:48.477016   19703 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:00:48.477674   19703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:48.477680   19703 out.go:358] Setting ErrFile to fd 2...
	I1105 10:00:48.477684   19703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:48.477879   19703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:00:48.479709   19703 out.go:352] Setting JSON to false
	I1105 10:00:48.510951   19703 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7217,"bootTime":1730822431,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:00:48.511118   19703 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:00:48.569600   19703 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:00:48.610699   19703 notify.go:220] Checking for updates...
	I1105 10:00:48.634693   19703 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:00:48.698776   19703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:00:48.753700   19703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:00:48.775781   19703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:00:48.796789   19703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:48.817657   19703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:00:48.839040   19703 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:00:48.871720   19703 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:00:48.913701   19703 start.go:297] selected driver: hyperkit
	I1105 10:00:48.913733   19703 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:00:48.913751   19703 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:00:48.920486   19703 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:00:48.920632   19703 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:00:48.931479   19703 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:00:48.937804   19703 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:48.937824   19703 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:00:48.937857   19703 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:00:48.938103   19703 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:00:48.938134   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:00:48.938170   19703 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 10:00:48.938175   19703 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 10:00:48.938248   19703 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:00:48.938346   19703 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:00:48.959836   19703 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:00:49.001660   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:00:49.001712   19703 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:00:49.001743   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:00:49.001910   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:00:49.001924   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:00:49.002321   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:00:49.002355   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json: {Name:mk69fb3d9aca0b41d8bea722484079aba6357863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:00:49.002868   19703 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:00:49.002969   19703 start.go:364] duration metric: took 85.161µs to acquireMachinesLock for "ha-213000"
	I1105 10:00:49.003007   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:00:49.003069   19703 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:00:49.024722   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:00:49.024964   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:49.025013   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:49.037332   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57614
	I1105 10:00:49.037758   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:49.038202   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:00:49.038215   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:49.038496   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:49.038622   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:00:49.038725   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:49.038849   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:00:49.038876   19703 client.go:168] LocalClient.Create starting
	I1105 10:00:49.038916   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:00:49.038980   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:00:49.038997   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:00:49.039061   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:00:49.039108   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:00:49.039119   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:00:49.039131   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:00:49.039140   19703 main.go:141] libmachine: (ha-213000) Calling .PreCreateCheck
	I1105 10:00:49.039304   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.039463   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:00:49.045805   19703 main.go:141] libmachine: Creating machine...
	I1105 10:00:49.045812   19703 main.go:141] libmachine: (ha-213000) Calling .Create
	I1105 10:00:49.045899   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.046076   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.045898   19713 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:49.046146   19703 main.go:141] libmachine: (ha-213000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:00:49.239282   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.239158   19713 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa...
	I1105 10:00:49.422633   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.422543   19713 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk...
	I1105 10:00:49.422652   19703 main.go:141] libmachine: (ha-213000) DBG | Writing magic tar header
	I1105 10:00:49.422661   19703 main.go:141] libmachine: (ha-213000) DBG | Writing SSH key tar header
	I1105 10:00:49.422947   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.422905   19713 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000 ...
	I1105 10:00:49.801010   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.801025   19703 main.go:141] libmachine: (ha-213000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:00:49.801067   19703 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:00:49.968558   19703 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:00:49.968605   19703 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:00:49.968643   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000112720)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:00:49.968680   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000112720)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:00:49.968759   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:00:49.968801   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:00:49.968818   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:00:49.972369   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Pid is 19716
	I1105 10:00:49.973014   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:00:49.973034   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.973101   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:49.974438   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:49.974449   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:49.974461   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:49.974478   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:49.974495   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:49.985017   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:00:50.043482   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:00:50.044217   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:00:50.044239   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:00:50.044246   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:00:50.044251   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:00:50.437454   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:00:50.437468   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:00:50.552096   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:00:50.552115   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:00:50.552133   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:00:50.552146   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:00:50.553015   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:00:50.553028   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:00:51.975030   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 1
	I1105 10:00:51.975047   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:51.975058   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:51.976103   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:51.976148   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:51.976166   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:51.976186   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:51.976200   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:53.977051   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 2
	I1105 10:00:53.977066   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:53.977114   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:53.978147   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:53.978190   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:53.978202   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:53.978220   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:53.978233   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:55.979026   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 3
	I1105 10:00:55.979043   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:55.979100   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:55.980034   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:55.980091   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:55.980102   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:55.980125   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:55.980137   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:56.301268   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:00:56.301301   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:00:56.301310   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:00:56.324886   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:00:57.980637   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 4
	I1105 10:00:57.980652   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:57.980732   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:57.981684   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:57.981732   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:57.981742   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:57.981749   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:57.981757   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:59.983824   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 5
	I1105 10:00:59.983847   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:59.984007   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:59.985286   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:59.985368   19703 main.go:141] libmachine: (ha-213000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:00:59.985384   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:00:59.985396   19703 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:00:59.985402   19703 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:00:59.985465   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:00:59.986295   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:59.986452   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:59.986594   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:00:59.986606   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:00:59.986719   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:59.986810   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:59.988032   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:00:59.988083   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:00:59.988087   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:00:59.988112   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:00:59.988202   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:00:59.988332   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:00:59.988436   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:00:59.988528   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:00:59.988730   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:00:59.988975   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:00:59.988982   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:01:01.011155   19703 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1105 10:01:04.073024   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:04.073037   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:01:04.073043   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.073211   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.073307   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.073401   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.073493   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.073653   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.073811   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.073819   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:01:04.133464   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:01:04.133513   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:01:04.133519   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:01:04.133529   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.133678   19703 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:01:04.133689   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.133791   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.133872   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.133967   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.134069   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.134170   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.134305   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.134436   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.134444   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:01:04.206864   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:01:04.206883   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.207030   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.207140   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.207234   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.207324   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.207509   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.207699   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.207711   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:01:04.275310   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:04.275329   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:01:04.275346   19703 buildroot.go:174] setting up certificates
	I1105 10:01:04.275367   19703 provision.go:84] configureAuth start
	I1105 10:01:04.275378   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.275523   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:04.275627   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.275736   19703 provision.go:143] copyHostCerts
	I1105 10:01:04.275773   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:04.275854   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:01:04.275861   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:04.276002   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:01:04.276234   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:04.276283   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:01:04.276287   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:04.276380   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:01:04.276579   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:04.276626   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:01:04.276631   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:04.276750   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:01:04.276914   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:01:04.409758   19703 provision.go:177] copyRemoteCerts
	I1105 10:01:04.409836   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:01:04.409852   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.410004   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.410102   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.410207   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.410308   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:04.447116   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:01:04.447193   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:01:04.466891   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:01:04.466954   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 10:01:04.486228   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:01:04.486290   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:01:04.506082   19703 provision.go:87] duration metric: took 230.693486ms to configureAuth
	I1105 10:01:04.506098   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:01:04.506258   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:04.506272   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:04.506412   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.506508   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.506593   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.506676   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.506765   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.506897   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.507032   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.507040   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:01:04.567965   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:01:04.567978   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:01:04.568060   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:01:04.568074   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.568219   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.568335   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.568441   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.568552   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.568731   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.568876   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.568928   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:01:04.639803   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:01:04.639825   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.639961   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.640058   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.640141   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.640255   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.640420   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.640549   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.640561   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:01:06.214895   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:01:06.214911   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:01:06.214917   19703 main.go:141] libmachine: (ha-213000) Calling .GetURL
	I1105 10:01:06.215063   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:01:06.215071   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:01:06.215076   19703 client.go:171] duration metric: took 17.176350291s to LocalClient.Create
	I1105 10:01:06.215089   19703 start.go:167] duration metric: took 17.176396472s to libmachine.API.Create "ha-213000"
	I1105 10:01:06.215099   19703 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:01:06.215106   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:01:06.215116   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.215261   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:01:06.215274   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.215361   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.215442   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.215528   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.215620   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.251640   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:01:06.255113   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:01:06.255129   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:01:06.255230   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:01:06.255446   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:01:06.255453   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:01:06.255711   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:01:06.263216   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:06.283684   19703 start.go:296] duration metric: took 68.576557ms for postStartSetup
	I1105 10:01:06.283726   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:01:06.284405   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:06.284548   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:06.284926   19703 start.go:128] duration metric: took 17.282000829s to createHost
	I1105 10:01:06.284941   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.285030   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.285125   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.285202   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.285269   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.285398   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:06.285521   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:06.285528   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:01:06.344331   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829665.881654888
	
	I1105 10:01:06.344343   19703 fix.go:216] guest clock: 1730829665.881654888
	I1105 10:01:06.344347   19703 fix.go:229] Guest: 2024-11-05 10:01:05.881654888 -0800 PST Remote: 2024-11-05 10:01:06.284934 -0800 PST m=+17.850547767 (delta=-403.279112ms)
	I1105 10:01:06.344367   19703 fix.go:200] guest clock delta is within tolerance: -403.279112ms
	I1105 10:01:06.344370   19703 start.go:83] releasing machines lock for "ha-213000", held for 17.341551607s
	I1105 10:01:06.344388   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.344528   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:06.344623   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.344951   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.345054   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.345149   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:01:06.345178   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.345208   19703 ssh_runner.go:195] Run: cat /version.json
	I1105 10:01:06.345219   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.345270   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.345332   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.345365   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.345442   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.345461   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.345518   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.345559   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.345639   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.378204   19703 ssh_runner.go:195] Run: systemctl --version
	I1105 10:01:06.426915   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:01:06.431535   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:01:06.431591   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:01:06.445898   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:01:06.445913   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:06.446023   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:06.460899   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:01:06.469852   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:01:06.478814   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:01:06.478874   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:01:06.487613   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:06.496557   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:01:06.505258   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:06.514169   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:01:06.524040   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:01:06.533030   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:01:06.541790   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:01:06.550841   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:01:06.558861   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:01:06.558919   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:01:06.568040   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:01:06.576174   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:06.680889   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:01:06.699975   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:06.700071   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:01:06.713715   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:06.724704   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:01:06.743034   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:06.753151   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:06.764276   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:01:06.804304   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:06.815447   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:06.838920   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:01:06.842715   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:01:06.857786   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:01:06.875540   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:01:06.983809   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:01:07.086590   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:01:07.086669   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:01:07.101565   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:07.202392   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:09.490529   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.288138695s)
	I1105 10:01:09.490615   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:01:09.502437   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:01:09.516436   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:09.526819   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:01:09.622839   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:01:09.716251   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:09.826522   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:01:09.839888   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:09.850796   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:09.959403   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:01:10.017340   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:01:10.017457   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:01:10.021721   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:01:10.021786   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:01:10.024691   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:01:10.049837   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:01:10.049922   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:10.066022   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:10.125079   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:01:10.125132   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:10.125605   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:01:10.129273   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:10.139143   19703 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:01:10.139212   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:01:10.139280   19703 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:01:10.150154   19703 docker.go:689] Got preloaded images: 
	I1105 10:01:10.150166   19703 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1105 10:01:10.150233   19703 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1105 10:01:10.157824   19703 ssh_runner.go:195] Run: which lz4
	I1105 10:01:10.160636   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 10:01:10.160780   19703 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 10:01:10.163841   19703 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 10:01:10.163861   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1105 10:01:11.159350   19703 docker.go:653] duration metric: took 998.641869ms to copy over tarball
	I1105 10:01:11.159432   19703 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 10:01:13.249323   19703 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089892673s)
	I1105 10:01:13.249340   19703 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 10:01:13.274325   19703 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1105 10:01:13.282463   19703 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1105 10:01:13.296334   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:13.388712   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:15.739164   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.350454399s)
	I1105 10:01:15.739273   19703 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:01:15.754024   19703 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1105 10:01:15.754043   19703 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:01:15.754049   19703 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:01:15.754140   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:01:15.754227   19703 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:01:15.788737   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:01:15.788757   19703 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 10:01:15.788770   19703 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:01:15.788787   19703 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:01:15.788861   19703 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:01:15.788877   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:01:15.788942   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:01:15.801675   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:01:15.801751   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:01:15.801824   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:15.809490   19703 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:01:15.809553   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:01:15.816819   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:01:15.831209   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:01:15.844621   19703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:01:15.857998   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1105 10:01:15.871169   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:01:15.874131   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:15.883385   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:15.976109   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:01:15.992221   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:01:15.992233   19703 certs.go:194] generating shared ca certs ...
	I1105 10:01:15.992243   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:15.992461   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:01:15.992552   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:01:15.992562   19703 certs.go:256] generating profile certs ...
	I1105 10:01:15.992612   19703 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:01:15.992624   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt with IP's: []
	I1105 10:01:16.094282   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt ...
	I1105 10:01:16.094299   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt: {Name:mk32df45c928182ea5273921e15df540dba3284b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.094649   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key ...
	I1105 10:01:16.094656   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key: {Name:mk4ba8eb16cdbfaf693d3586557970b225775c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.094907   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3
	I1105 10:01:16.094921   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1105 10:01:16.166905   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 ...
	I1105 10:01:16.166920   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3: {Name:mk8e48df26de9447c3326b40118c66ea248d3cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.167265   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3 ...
	I1105 10:01:16.167275   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3: {Name:mkb555a3da1a71d498a5e7f44da4ed0baf461c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.167543   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:01:16.167743   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:01:16.167942   19703 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:01:16.167958   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt with IP's: []
	I1105 10:01:16.340393   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt ...
	I1105 10:01:16.340414   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt: {Name:mkad63aa252d0a246c051641017bfdd8bd78fbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.340763   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key ...
	I1105 10:01:16.340771   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key: {Name:mkc1a14cacaacc53921fd9d706ec801444580291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.341021   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:01:16.341051   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:01:16.341070   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:01:16.341091   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:01:16.341110   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:01:16.341129   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:01:16.341149   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:01:16.341171   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:01:16.341276   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:01:16.341338   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:01:16.341346   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:01:16.341376   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:01:16.341409   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:01:16.341438   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:01:16.341499   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:16.341533   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.341553   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.341577   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.342013   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:01:16.361630   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:01:16.380740   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:01:16.400614   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:01:16.420038   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 10:01:16.439653   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:01:16.458562   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:01:16.478643   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:01:16.497792   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:01:16.516678   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:01:16.535739   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:01:16.555130   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:01:16.569073   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:01:16.573341   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:01:16.582782   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.586227   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.586277   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.590528   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:01:16.599704   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:01:16.608870   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.612245   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.612298   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.616513   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:01:16.625608   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:01:16.635771   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.639310   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.639358   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.643770   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:01:16.654663   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:01:16.660794   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:01:16.660842   19703 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:01:16.660953   19703 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:01:16.677060   19703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:01:16.690427   19703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 10:01:16.700859   19703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 10:01:16.709261   19703 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 10:01:16.709272   19703 kubeadm.go:157] found existing configuration files:
	
	I1105 10:01:16.709351   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 10:01:16.718113   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 10:01:16.718192   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 10:01:16.726411   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 10:01:16.734224   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 10:01:16.734289   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 10:01:16.742733   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 10:01:16.750784   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 10:01:16.750844   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 10:01:16.759076   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 10:01:16.766845   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 10:01:16.766909   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 10:01:16.774996   19703 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 10:01:16.840437   19703 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 10:01:16.840491   19703 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 10:01:16.926763   19703 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 10:01:16.926877   19703 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 10:01:16.926980   19703 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 10:01:16.936091   19703 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 10:01:16.983362   19703 out.go:235]   - Generating certificates and keys ...
	I1105 10:01:16.983421   19703 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 10:01:16.983471   19703 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 10:01:17.072797   19703 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 10:01:17.179588   19703 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 10:01:17.306014   19703 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 10:01:17.631639   19703 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 10:01:17.770167   19703 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 10:01:17.770365   19703 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-213000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1105 10:01:18.036090   19703 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 10:01:18.036251   19703 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-213000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1105 10:01:18.099648   19703 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 10:01:18.290329   19703 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 10:01:18.487625   19703 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 10:01:18.487812   19703 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 10:01:18.631478   19703 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 10:01:18.780093   19703 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 10:01:18.888960   19703 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 10:01:19.168437   19703 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 10:01:19.347823   19703 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 10:01:19.348317   19703 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 10:01:19.350236   19703 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 10:01:19.371622   19703 out.go:235]   - Booting up control plane ...
	I1105 10:01:19.371724   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 10:01:19.371803   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 10:01:19.371856   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 10:01:19.371944   19703 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 10:01:19.372021   19703 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 10:01:19.372058   19703 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 10:01:19.481087   19703 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 10:01:19.481190   19703 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 10:01:20.488429   19703 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007994623s
	I1105 10:01:20.488531   19703 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 10:01:26.203663   19703 kubeadm.go:310] [api-check] The API server is healthy after 5.719526197s
	I1105 10:01:26.212624   19703 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 10:01:26.220645   19703 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 10:01:26.233694   19703 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 10:01:26.233859   19703 kubeadm.go:310] [mark-control-plane] Marking the node ha-213000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 10:01:26.244246   19703 kubeadm.go:310] [bootstrap-token] Using token: w4nohd.4e3143tllv8ohc8g
	I1105 10:01:26.284768   19703 out.go:235]   - Configuring RBAC rules ...
	I1105 10:01:26.284885   19703 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 10:01:26.286787   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 10:01:26.310075   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 10:01:26.312761   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 10:01:26.318937   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 10:01:26.322239   19703 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 10:01:26.608210   19703 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 10:01:27.037009   19703 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 10:01:27.610360   19703 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 10:01:27.611067   19703 kubeadm.go:310] 
	I1105 10:01:27.611117   19703 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 10:01:27.611123   19703 kubeadm.go:310] 
	I1105 10:01:27.611199   19703 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 10:01:27.611208   19703 kubeadm.go:310] 
	I1105 10:01:27.611229   19703 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 10:01:27.611277   19703 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 10:01:27.611341   19703 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 10:01:27.611352   19703 kubeadm.go:310] 
	I1105 10:01:27.611397   19703 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 10:01:27.611403   19703 kubeadm.go:310] 
	I1105 10:01:27.611451   19703 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 10:01:27.611459   19703 kubeadm.go:310] 
	I1105 10:01:27.611495   19703 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 10:01:27.611550   19703 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 10:01:27.611623   19703 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 10:01:27.611630   19703 kubeadm.go:310] 
	I1105 10:01:27.611697   19703 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 10:01:27.611766   19703 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 10:01:27.611773   19703 kubeadm.go:310] 
	I1105 10:01:27.611836   19703 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w4nohd.4e3143tllv8ohc8g \
	I1105 10:01:27.611921   19703 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 \
	I1105 10:01:27.611942   19703 kubeadm.go:310] 	--control-plane 
	I1105 10:01:27.611949   19703 kubeadm.go:310] 
	I1105 10:01:27.612027   19703 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 10:01:27.612038   19703 kubeadm.go:310] 
	I1105 10:01:27.612109   19703 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w4nohd.4e3143tllv8ohc8g \
	I1105 10:01:27.612190   19703 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 
	I1105 10:01:27.612839   19703 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
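The control-plane join command printed by kubeadm above can be captured for later nodes. A minimal sketch, assuming the token and CA hash are taken verbatim from the log output (they are cluster-specific and expire):

```shell
# Reconstruct the control-plane join command kubeadm printed above.
# TOKEN and CA_HASH are the values from this particular run; real scripts
# should mint a fresh token with `kubeadm token create --print-join-command`.
TOKEN="w4nohd.4e3143tllv8ohc8g"
CA_HASH="sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564"
JOIN_CMD="kubeadm join control-plane.minikube.internal:8443 --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH} --control-plane"
echo "${JOIN_CMD}"
```

Dropping `--control-plane` yields the worker-node variant kubeadm printed immediately after.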
	I1105 10:01:27.612851   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:01:27.612855   19703 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 10:01:27.638912   19703 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 10:01:27.682614   19703 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 10:01:27.687942   19703 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 10:01:27.687953   19703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 10:01:27.701992   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 10:01:27.936771   19703 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 10:01:27.936836   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:27.936838   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000 minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=true
	I1105 10:01:28.117503   19703 ops.go:34] apiserver oom_adj: -16
	I1105 10:01:28.117657   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:28.618627   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:29.117808   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:29.617729   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:30.119155   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:30.618084   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:31.118505   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:31.195673   19703 kubeadm.go:1113] duration metric: took 3.258930438s to wait for elevateKubeSystemPrivileges
	I1105 10:01:31.195694   19703 kubeadm.go:394] duration metric: took 14.534988132s to StartCluster
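The repeated `kubectl get sa default` runs above are minikube polling (roughly every 500ms) until the default ServiceAccount exists, i.e. until kube-system privileges are elevated. A hedged sketch of that wait loop, with a hypothetical `check` function standing in for the kubectl call:

```shell
# Poll-until-ready loop mirroring the elevateKubeSystemPrivileges wait above.
# `check` is a stand-in for `kubectl get sa default`; here it succeeds on the
# third attempt (the real loop also sleeps ~500ms between attempts).
attempts=0
check() { [ "$attempts" -ge 3 ]; }
until check; do
  attempts=$((attempts + 1))
done
echo "ready after $attempts attempts"
```

A production version would also bound the loop with a timeout, as minikube's 1m0s lock timeouts in the surrounding log suggest.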
	I1105 10:01:31.195710   19703 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:31.195820   19703 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:01:31.196307   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:31.196590   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 10:01:31.196592   19703 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:31.196603   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:01:31.196618   19703 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:01:31.196671   19703 addons.go:69] Setting storage-provisioner=true in profile "ha-213000"
	I1105 10:01:31.196685   19703 addons.go:234] Setting addon storage-provisioner=true in "ha-213000"
	I1105 10:01:31.196691   19703 addons.go:69] Setting default-storageclass=true in profile "ha-213000"
	I1105 10:01:31.196703   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:31.196707   19703 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-213000"
	I1105 10:01:31.196741   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:31.196976   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.196986   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.196996   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.197000   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.208908   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57639
	I1105 10:01:31.209261   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.209642   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.209655   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.209868   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57641
	I1105 10:01:31.209885   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.210032   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.210143   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.210244   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.210251   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.210574   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.210584   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.210788   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.211192   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.211225   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.212394   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:01:31.213752   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:01:31.214400   19703 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:01:31.214537   19703 addons.go:234] Setting addon default-storageclass=true in "ha-213000"
	I1105 10:01:31.214564   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:31.214803   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.214828   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.223254   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57643
	I1105 10:01:31.223597   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.224001   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.224022   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.224270   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.224394   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.224509   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.224581   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.225831   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:31.226397   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57645
	I1105 10:01:31.226753   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.227096   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.227107   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.227355   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.227741   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.227767   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.238983   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57647
	I1105 10:01:31.239279   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.239639   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.239659   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.239882   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.239983   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.240069   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.240135   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.241282   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:31.241435   19703 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 10:01:31.241450   19703 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 10:01:31.241460   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:31.241543   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:31.241623   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:31.241696   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:31.241776   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:31.250526   19703 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 10:01:31.270056   19703 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 10:01:31.270068   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 10:01:31.270085   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:31.270249   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:31.270368   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:31.270493   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:31.270593   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:31.343009   19703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 10:01:31.358734   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 10:01:31.372889   19703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 10:01:31.583824   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.583836   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.584072   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.584088   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.584097   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.584114   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.584120   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.584249   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.584257   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.584263   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.584311   19703 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:01:31.584343   19703 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:01:31.584428   19703 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 10:01:31.584433   19703 round_trippers.go:469] Request Headers:
	I1105 10:01:31.584440   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:01:31.584445   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:01:31.589847   19703 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:01:31.590273   19703 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 10:01:31.590280   19703 round_trippers.go:469] Request Headers:
	I1105 10:01:31.590285   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:01:31.590289   19703 round_trippers.go:473]     Content-Type: application/json
	I1105 10:01:31.590292   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:01:31.591793   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:01:31.591915   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.591923   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.592075   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.592084   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.592098   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.661628   19703 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
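The host-record injection confirmed above is the sed pipeline logged at 10:01:31.358: it inserts a CoreDNS `hosts` block before the `forward . /etc/resolv.conf` directive and a `log` directive before `errors`. A local re-run of the same sed expressions on a trimmed sample Corefile (GNU sed assumed for the `\n` handling in `i\`):

```shell
# Apply minikube's CoreDNS ConfigMap edit to a stand-in Corefile fragment.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'
patched="$(printf '%s\n' "$corefile" | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log')"
printf '%s\n' "$patched"
```

The `fallthrough` line matters: without it, the `hosts` plugin would swallow queries it cannot answer instead of passing them on to `forward`.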
	I1105 10:01:31.799548   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.799567   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.799772   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.799790   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.799800   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.799817   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.799822   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.799950   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.799959   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.823619   19703 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1105 10:01:31.881388   19703 addons.go:510] duration metric: took 684.78194ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1105 10:01:31.881432   19703 start.go:246] waiting for cluster config update ...
	I1105 10:01:31.881446   19703 start.go:255] writing updated cluster config ...
	I1105 10:01:31.902486   19703 out.go:201] 
	I1105 10:01:31.940014   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:31.940131   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:31.962472   19703 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:01:32.004496   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:01:32.004517   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:01:32.004642   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:01:32.004651   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:01:32.004703   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:32.005148   19703 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:01:32.005220   19703 start.go:364] duration metric: took 59.105µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:01:32.005235   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:32.005275   19703 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1105 10:01:32.026387   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:01:32.026549   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:32.026581   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:32.038441   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57652
	I1105 10:01:32.038798   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:32.039196   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:32.039218   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:32.039447   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:32.039560   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:32.039666   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:32.039774   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:01:32.039792   19703 client.go:168] LocalClient.Create starting
	I1105 10:01:32.039824   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:01:32.039866   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:01:32.039878   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:01:32.039917   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:01:32.039950   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:01:32.039959   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:01:32.039978   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:01:32.039982   19703 main.go:141] libmachine: (ha-213000-m02) Calling .PreCreateCheck
	I1105 10:01:32.040065   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.040093   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:32.047652   19703 main.go:141] libmachine: Creating machine...
	I1105 10:01:32.047661   19703 main.go:141] libmachine: (ha-213000-m02) Calling .Create
	I1105 10:01:32.047736   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.047898   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.047732   19737 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:01:32.047955   19703 main.go:141] libmachine: (ha-213000-m02) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:01:32.258405   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.258328   19737 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa...
	I1105 10:01:32.370475   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.370420   19737 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk...
	I1105 10:01:32.370496   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Writing magic tar header
	I1105 10:01:32.370504   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Writing SSH key tar header
	I1105 10:01:32.371373   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.371253   19737 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02 ...
	I1105 10:01:32.760483   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.760499   19703 main.go:141] libmachine: (ha-213000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:01:32.760532   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:01:32.785150   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:01:32.785168   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:01:32.785208   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:01:32.785232   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:01:32.785286   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:01:32.785316   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:01:32.785326   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:01:32.788392   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Pid is 19738
	I1105 10:01:32.789760   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:01:32.789776   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.789838   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:32.790923   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:32.791036   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:32.791047   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:32.791055   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:32.791063   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:32.791071   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:32.799256   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:01:32.810076   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:01:32.811011   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:01:32.811039   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:01:32.811065   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:01:32.811083   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:01:33.216124   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:01:33.216141   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:01:33.331141   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:01:33.331187   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:01:33.331200   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:01:33.331210   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:01:33.331930   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:01:33.331952   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:01:34.791292   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 1
	I1105 10:01:34.791308   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:34.791415   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:34.792404   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:34.792463   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:34.792476   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:34.792486   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:34.792493   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:34.792500   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:36.794004   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 2
	I1105 10:01:36.794019   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:36.794104   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:36.795044   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:36.795099   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:36.795107   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:36.795115   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:36.795123   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:36.795143   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:38.796117   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 3
	I1105 10:01:38.796134   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:38.796192   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:38.797137   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:38.797198   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:38.797207   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:38.797215   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:38.797220   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:38.797228   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:39.085812   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:01:39.085887   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:01:39.085896   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:01:39.108556   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:01:40.797630   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 4
	I1105 10:01:40.797646   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:40.797725   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:40.798681   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:40.798749   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:40.798757   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:40.798766   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:40.798773   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:40.798785   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:42.800804   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 5
	I1105 10:01:42.800819   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:42.800888   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:42.801843   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:42.801914   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:01:42.801923   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:01:42.801933   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:01:42.801939   19703 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:01:42.802006   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:42.802642   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:42.802744   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:42.802850   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:01:42.802857   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:01:42.802937   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:42.802999   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:42.803924   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:01:42.803931   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:01:42.803935   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:01:42.803939   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:42.804024   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:42.804111   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:42.804205   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:42.804300   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:42.804436   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:42.804615   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:42.804623   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:01:43.860176   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:43.860188   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:01:43.860194   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.860339   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.860450   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.860549   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.860635   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.860782   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.860934   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.860943   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:01:43.918908   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:01:43.918939   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:01:43.918944   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:01:43.918953   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:43.919089   19703 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:01:43.919101   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:43.919200   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.919297   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.919385   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.919473   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.919562   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.919750   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.919884   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.919892   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:01:43.986937   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:01:43.986952   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.987088   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.987192   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.987282   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.987385   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.987525   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.987656   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.987668   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:01:44.049824   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:44.049837   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:01:44.049852   19703 buildroot.go:174] setting up certificates
	I1105 10:01:44.049859   19703 provision.go:84] configureAuth start
	I1105 10:01:44.049865   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:44.050000   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:44.050104   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.050210   19703 provision.go:143] copyHostCerts
	I1105 10:01:44.050243   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:44.050287   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:01:44.050293   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:44.050418   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:01:44.050628   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:44.050658   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:01:44.050663   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:44.050731   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:01:44.050902   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:44.050930   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:01:44.050935   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:44.050999   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:01:44.051159   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:01:44.155430   19703 provision.go:177] copyRemoteCerts
	I1105 10:01:44.155494   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:01:44.155508   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.155652   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.155761   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.155855   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.155960   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:44.190390   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:01:44.190459   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:01:44.209956   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:01:44.210020   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:01:44.229611   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:01:44.229678   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:01:44.249732   19703 provision.go:87] duration metric: took 199.867169ms to configureAuth
	I1105 10:01:44.249751   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:01:44.249884   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:44.249897   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:44.250035   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.250145   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.250227   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.250309   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.250384   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.250517   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.250642   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.250651   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:01:44.307473   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:01:44.307484   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:01:44.307570   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:01:44.307582   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.307713   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.307800   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.307896   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.307984   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.308146   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.308285   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.308329   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:01:44.374560   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:01:44.374579   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.374715   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.374811   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.374905   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.374997   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.375155   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.375292   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.375306   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:01:45.916909   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:01:45.916923   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:01:45.916928   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetURL
	I1105 10:01:45.917079   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:01:45.917088   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:01:45.917094   19703 client.go:171] duration metric: took 13.877421847s to LocalClient.Create
	I1105 10:01:45.917107   19703 start.go:167] duration metric: took 13.877464427s to libmachine.API.Create "ha-213000"
	I1105 10:01:45.917113   19703 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:01:45.917119   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:01:45.917129   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:45.917290   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:01:45.917304   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:45.917390   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:45.917474   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:45.917556   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:45.917651   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:45.954621   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:01:45.963284   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:01:45.963298   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:01:45.963394   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:01:45.963534   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:01:45.963541   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:01:45.963709   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:01:45.974744   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:46.007617   19703 start.go:296] duration metric: took 90.496072ms for postStartSetup
	I1105 10:01:46.007644   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:46.008278   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:46.008431   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:46.008809   19703 start.go:128] duration metric: took 14.00365458s to createHost
	I1105 10:01:46.008826   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:46.008921   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.009026   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.009114   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.009199   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.009324   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:46.009442   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:46.009449   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:01:46.065399   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829706.339878187
	
	I1105 10:01:46.065410   19703 fix.go:216] guest clock: 1730829706.339878187
	I1105 10:01:46.065415   19703 fix.go:229] Guest: 2024-11-05 10:01:46.339878187 -0800 PST Remote: 2024-11-05 10:01:46.00882 -0800 PST m=+57.574793708 (delta=331.058187ms)
	I1105 10:01:46.065424   19703 fix.go:200] guest clock delta is within tolerance: 331.058187ms
	I1105 10:01:46.065428   19703 start.go:83] releasing machines lock for "ha-213000-m02", held for 14.060329703s
	I1105 10:01:46.065445   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.065576   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:46.087717   19703 out.go:177] * Found network options:
	I1105 10:01:46.109842   19703 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:01:46.131999   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:01:46.132065   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.132924   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.133189   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.133350   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:01:46.133419   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:01:46.133425   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:01:46.133511   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:01:46.133525   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:46.133564   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.133658   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.133724   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.133792   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.133856   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.133913   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.133985   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:46.134067   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:01:46.166296   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:01:46.166372   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:01:46.210783   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:01:46.210798   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:46.210864   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:46.225606   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:01:46.234567   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:01:46.243434   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:01:46.243498   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:01:46.252254   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:46.260991   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:01:46.269783   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:46.278460   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:01:46.287315   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:01:46.296362   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:01:46.305259   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:01:46.314314   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:01:46.322151   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:01:46.322203   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:01:46.331333   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:01:46.339411   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:46.437814   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:01:46.456976   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:46.457074   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:01:46.473512   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:46.487971   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:01:46.501912   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:46.512646   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:46.523147   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:01:46.545158   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:46.555335   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:46.570377   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:01:46.573322   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:01:46.580455   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:01:46.594087   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:01:46.688786   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:01:46.806047   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:01:46.806077   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:01:46.821570   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:46.919986   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:49.283369   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.363383588s)
	I1105 10:01:49.283454   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:01:49.293731   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:01:49.306548   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:49.317994   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:01:49.421101   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:01:49.523439   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:49.641875   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:01:49.655594   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:49.667711   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:49.787298   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:01:49.845991   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:01:49.846096   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:01:49.851066   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:01:49.851131   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:01:49.854437   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:01:49.883943   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:01:49.884034   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:49.900385   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:49.958496   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:01:50.015373   19703 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:01:50.036835   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:50.037289   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:01:50.041454   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:50.051908   19703 mustload.go:65] Loading cluster: ha-213000
	I1105 10:01:50.052063   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:50.052290   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:50.052318   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:50.063943   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57675
	I1105 10:01:50.064254   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:50.064622   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:50.064639   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:50.064857   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:50.064943   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:50.065040   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:50.065101   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:50.066239   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:50.066502   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:50.066538   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:50.077511   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57677
	I1105 10:01:50.077820   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:50.078153   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:50.078165   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:50.078378   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:50.078491   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:50.078597   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:01:50.078603   19703 certs.go:194] generating shared ca certs ...
	I1105 10:01:50.078614   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.078762   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:01:50.078814   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:01:50.078823   19703 certs.go:256] generating profile certs ...
	I1105 10:01:50.078932   19703 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:01:50.078952   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:01:50.078965   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:01:50.259675   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 ...
	I1105 10:01:50.259696   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614: {Name:mk88a6c605d32cdc699192a3b9f65c36d4d8999e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.260061   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614 ...
	I1105 10:01:50.260070   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614: {Name:mk09cfa8a7c58367d4fd503cdc6b46cb11ab646e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.260325   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:01:50.260527   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:01:50.260749   19703 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:01:50.260759   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:01:50.260781   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:01:50.260800   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:01:50.260819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:01:50.260838   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:01:50.260856   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:01:50.260874   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:01:50.260893   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:01:50.260970   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:01:50.261007   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:01:50.261015   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:01:50.261049   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:01:50.261078   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:01:50.261106   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:01:50.261167   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:50.261198   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.261219   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.261239   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.261271   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:50.261425   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:50.261537   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:50.261637   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:50.261724   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:50.291214   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:01:50.295029   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:01:50.304845   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:01:50.308184   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:01:50.316387   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:01:50.319596   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:01:50.337836   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:01:50.342163   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:01:50.351433   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:01:50.354494   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:01:50.362620   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:01:50.365669   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:01:50.373430   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:01:50.393167   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:01:50.412948   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:01:50.433246   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:01:50.454659   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 10:01:50.476244   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:01:50.497382   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:01:50.518008   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:01:50.539746   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:01:50.559947   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:01:50.580509   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:01:50.600953   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:01:50.615539   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:01:50.631388   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:01:50.646013   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:01:50.660722   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:01:50.675559   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:01:50.690207   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:01:50.705146   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:01:50.709852   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:01:50.720235   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.724017   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.724092   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.728732   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:01:50.738807   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:01:50.748691   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.752394   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.752467   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.757374   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:01:50.767503   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:01:50.777410   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.781104   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.781174   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.785786   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:01:50.795583   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:01:50.798943   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:01:50.798984   19703 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:01:50.799037   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:01:50.799054   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:01:50.799124   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:01:50.814293   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:01:50.814341   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:01:50.814416   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:50.823146   19703 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 10:01:50.823227   19703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:50.832606   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl
	I1105 10:01:50.832611   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 10:01:50.832606   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 10:01:53.130716   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:01:53.131358   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:01:53.135037   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 10:01:53.135071   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 10:01:53.503819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:01:53.503977   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:01:53.507746   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 10:01:53.507776   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 10:01:54.433884   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:01:54.445780   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:01:54.449919   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:01:54.453259   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 10:01:54.453278   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 10:01:54.698228   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:01:54.705621   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:01:54.719140   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:01:54.732989   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:01:54.747064   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:01:54.750001   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:54.764611   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:54.865217   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:01:54.881672   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:54.881968   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:54.881993   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:54.911777   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57704
	I1105 10:01:54.912101   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:54.912484   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:54.912500   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:54.912742   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:54.912836   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:54.912927   19703 start.go:317] joinCluster: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:01:54.913009   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 10:01:54.913021   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:54.913107   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:54.913205   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:54.913312   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:54.913396   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:55.045976   19703 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:55.046004   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5r3zgt.rf02xhd5n0rx0515 --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I1105 10:02:51.053865   19703 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5r3zgt.rf02xhd5n0rx0515 --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (56.008348234s)
	I1105 10:02:51.053890   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 10:02:51.498162   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000-m02 minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=false
	I1105 10:02:51.582922   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-213000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 10:02:51.686566   19703 start.go:319] duration metric: took 56.774150217s to joinCluster
	I1105 10:02:51.686607   19703 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:02:51.686840   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:02:51.711458   19703 out.go:177] * Verifying Kubernetes components...
	I1105 10:02:51.753270   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:02:52.031063   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:02:52.044378   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:02:52.044645   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:02:52.044691   19703 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:02:52.044863   19703 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:02:52.044923   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:52.044928   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:52.044934   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:52.044938   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:52.058081   19703 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 10:02:52.545656   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:52.545674   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:52.545681   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:52.545696   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:52.555893   19703 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 10:02:53.045024   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:53.045046   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:53.045052   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:53.045055   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:53.048645   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:02:53.546069   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:53.546084   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:53.546091   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:53.546093   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:53.547996   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:54.045237   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:54.045257   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:54.045267   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:54.045272   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:54.047820   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:54.048291   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:54.545760   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:54.545775   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:54.545782   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:54.545785   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:54.547819   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:55.044978   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:55.044993   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:55.045001   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:55.045004   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:55.046984   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:55.545354   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:55.545368   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:55.545375   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:55.545378   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:55.548038   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.045218   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:56.045245   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:56.045253   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:56.045268   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:56.047334   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.544996   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:56.545011   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:56.545018   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:56.545021   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:56.547178   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.547527   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:57.045514   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:57.045534   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:57.045542   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:57.045547   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:57.047484   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:57.544988   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:57.545015   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:57.545024   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:57.545051   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:57.547803   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.045000   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:58.045015   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:58.045024   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:58.045028   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:58.047832   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.546341   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:58.546364   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:58.546375   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:58.546382   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:58.549120   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.549600   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:59.046258   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:59.046274   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:59.046280   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:59.046284   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:59.048819   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:59.545918   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:59.545955   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:59.545965   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:59.545970   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:59.548272   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:00.046889   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:00.046941   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:00.046952   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:00.046957   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:00.049445   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:00.545645   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:00.545688   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:00.545698   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:00.545703   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:00.547837   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:01.045437   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:01.045458   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:01.045487   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:01.045493   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:01.048085   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:01.048344   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:01.545645   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:01.545659   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:01.545667   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:01.545671   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:01.547691   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:02.045598   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:02.045646   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:02.045658   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:02.045664   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:02.048807   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:02.546565   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:02.546592   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:02.546604   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:02.546612   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:02.549476   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:03.045855   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:03.045870   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:03.045879   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:03.045884   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:03.048183   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:03.048643   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:03.545655   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:03.545672   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:03.545680   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:03.545685   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:03.548253   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:04.045787   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:04.045799   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:04.045804   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:04.045808   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:04.047953   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:04.545151   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:04.545177   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:04.545189   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:04.545194   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:04.548344   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:05.045476   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:05.045491   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:05.045499   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:05.045505   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:05.047869   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:05.545655   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:05.545683   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:05.545690   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:05.545695   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:05.547764   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:05.548130   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:06.044875   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:06.044918   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:06.044928   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:06.044935   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:06.048311   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:06.544896   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:06.544908   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:06.544914   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:06.544918   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:06.547153   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:07.046736   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:07.046760   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:07.046771   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:07.046776   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:07.053742   19703 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:03:07.545035   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:07.545055   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:07.545065   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:07.545072   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:07.548090   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:07.548456   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:08.045232   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:08.045266   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:08.045277   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:08.045283   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:08.047484   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:08.546527   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:08.546544   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:08.546553   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:08.546557   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:08.549226   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:09.044874   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:09.044886   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:09.044892   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:09.044895   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:09.047137   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:09.544845   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:09.544883   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:09.544894   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:09.544900   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:09.547114   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.045255   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.045274   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.045282   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.045287   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.047408   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.047748   19703 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:03:10.047761   19703 node_ready.go:38] duration metric: took 18.003041287s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:03:10.047767   19703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:03:10.047809   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:10.047815   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.047821   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.047824   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.050396   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.054843   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.054888   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:03:10.054893   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.054898   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.054902   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.056627   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.057017   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.057024   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.057030   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.057034   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.058541   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.058972   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.058981   19703 pod_ready.go:82] duration metric: took 4.12715ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.058987   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.059026   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:03:10.059031   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.059036   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.059040   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.060406   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.060936   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.060944   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.060949   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.060952   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.062259   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.062752   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.062760   19703 pod_ready.go:82] duration metric: took 3.768625ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.062766   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.062794   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:03:10.062799   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.062804   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.062808   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.064381   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.064737   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.064744   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.064749   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.064753   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.066188   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.066657   19703 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.066666   19703 pod_ready.go:82] duration metric: took 3.89498ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.066671   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.066702   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:03:10.066707   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.066716   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.066720   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.068481   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.068975   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.068982   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.068988   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.068993   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.070316   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.070596   19703 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.070604   19703 pod_ready.go:82] duration metric: took 3.927721ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.070613   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.246069   19703 request.go:632] Waited for 175.411357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:03:10.246135   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:03:10.246143   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.246150   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.246157   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.248509   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.445735   19703 request.go:632] Waited for 196.830174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.445773   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.445810   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.445820   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.445825   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.447694   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.448028   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.448037   19703 pod_ready.go:82] duration metric: took 377.422873ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.448044   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.645309   19703 request.go:632] Waited for 197.231613ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:03:10.645351   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:03:10.645387   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.645393   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.645398   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.647385   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.845570   19703 request.go:632] Waited for 197.573578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.845611   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.845619   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.845632   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.845641   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.848369   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.848765   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.848776   19703 pod_ready.go:82] duration metric: took 400.729678ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.848783   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.046537   19703 request.go:632] Waited for 197.717054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:03:11.046604   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:03:11.046612   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.046621   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.046628   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.048951   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:11.246799   19703 request.go:632] Waited for 197.304848ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:11.246915   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:11.246922   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.246932   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.246938   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.249732   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:11.250168   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:11.250177   19703 pod_ready.go:82] duration metric: took 401.392962ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.250184   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.446428   19703 request.go:632] Waited for 196.056309ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:03:11.446480   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:03:11.446489   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.446499   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.446505   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.449627   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:11.646969   19703 request.go:632] Waited for 196.797375ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:11.647076   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:11.647092   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.647104   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.647110   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.650574   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:11.650948   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:11.650974   19703 pod_ready.go:82] duration metric: took 400.789912ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.650982   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.845660   19703 request.go:632] Waited for 194.624945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:03:11.845696   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:03:11.845702   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.845710   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.845715   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.848177   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.045843   19703 request.go:632] Waited for 197.194177ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:12.045884   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:12.045890   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.045898   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.045904   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.048353   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.048691   19703 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.048700   19703 pod_ready.go:82] duration metric: took 397.716056ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.048712   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.247441   19703 request.go:632] Waited for 198.639286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:03:12.247526   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:03:12.247535   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.247546   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.247553   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.251277   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:12.445804   19703 request.go:632] Waited for 193.98688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.445866   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.445879   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.445891   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.445909   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.448985   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:12.449660   19703 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.449672   19703 pod_ready.go:82] duration metric: took 400.957346ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.449680   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.646023   19703 request.go:632] Waited for 196.29617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:03:12.646138   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:03:12.646148   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.646156   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.646160   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.648881   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.846423   19703 request.go:632] Waited for 197.041377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.846478   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.846486   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.846495   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.846500   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.849237   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.849526   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.849536   19703 pod_ready.go:82] duration metric: took 399.853481ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.849543   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:13.046888   19703 request.go:632] Waited for 197.276485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:03:13.046931   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:03:13.046938   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.046973   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.046978   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.049651   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:13.246632   19703 request.go:632] Waited for 196.567235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:13.246683   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:13.246692   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.246727   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.246737   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.249732   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:13.250368   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:13.250378   19703 pod_ready.go:82] duration metric: took 400.834283ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:13.250385   19703 pod_ready.go:39] duration metric: took 3.20263718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:03:13.250407   19703 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:03:13.250476   19703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:03:13.263389   19703 api_server.go:72] duration metric: took 21.576959393s to wait for apiserver process to appear ...
	I1105 10:03:13.263406   19703 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:03:13.263422   19703 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:03:13.267595   19703 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:03:13.267641   19703 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:03:13.267649   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.267658   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.267666   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.268160   19703 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:03:13.268232   19703 api_server.go:141] control plane version: v1.31.2
	I1105 10:03:13.268245   19703 api_server.go:131] duration metric: took 4.83504ms to wait for apiserver health ...
	I1105 10:03:13.268250   19703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:03:13.447320   19703 request.go:632] Waited for 179.029832ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.447393   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.447401   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.447409   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.447414   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.451090   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.454646   19703 system_pods.go:59] 17 kube-system pods found
	I1105 10:03:13.454662   19703 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:03:13.454667   19703 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:03:13.454671   19703 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:03:13.454673   19703 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:03:13.454676   19703 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:03:13.454679   19703 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:03:13.454681   19703 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:03:13.454685   19703 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:03:13.454688   19703 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:03:13.454699   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:03:13.454702   19703 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:03:13.454707   19703 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:03:13.454710   19703 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:03:13.454712   19703 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:03:13.454715   19703 system_pods.go:61] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:03:13.454718   19703 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:03:13.454721   19703 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:03:13.454725   19703 system_pods.go:74] duration metric: took 186.473341ms to wait for pod list to return data ...
	I1105 10:03:13.454731   19703 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:03:13.645590   19703 request.go:632] Waited for 190.785599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:03:13.645629   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:03:13.645636   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.645645   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.645651   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.648706   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.648833   19703 default_sa.go:45] found service account: "default"
	I1105 10:03:13.648842   19703 default_sa.go:55] duration metric: took 194.109049ms for default service account to be created ...
	I1105 10:03:13.648848   19703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:03:13.845301   19703 request.go:632] Waited for 196.413293ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.845347   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.845354   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.845362   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.845368   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.849295   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.853094   19703 system_pods.go:86] 17 kube-system pods found
	I1105 10:03:13.853105   19703 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:03:13.853109   19703 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:03:13.853113   19703 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:03:13.853116   19703 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:03:13.853122   19703 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:03:13.853125   19703 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:03:13.853128   19703 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:03:13.853131   19703 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:03:13.853133   19703 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:03:13.853139   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:03:13.853145   19703 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:03:13.853147   19703 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:03:13.853150   19703 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:03:13.853153   19703 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:03:13.853155   19703 system_pods.go:89] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:03:13.853158   19703 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:03:13.853161   19703 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:03:13.853165   19703 system_pods.go:126] duration metric: took 204.31519ms to wait for k8s-apps to be running ...
	I1105 10:03:13.853173   19703 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:03:13.853242   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:03:13.864800   19703 system_svc.go:56] duration metric: took 11.624062ms WaitForService to wait for kubelet
	I1105 10:03:13.864814   19703 kubeadm.go:582] duration metric: took 22.178392392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:03:13.864830   19703 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:03:14.047134   19703 request.go:632] Waited for 182.24401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:03:14.047270   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:03:14.047286   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:14.047300   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:14.047306   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:14.051327   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:03:14.051979   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:03:14.051996   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:03:14.052008   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:03:14.052011   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:03:14.052014   19703 node_conditions.go:105] duration metric: took 187.182073ms to run NodePressure ...
	I1105 10:03:14.052022   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:03:14.052040   19703 start.go:255] writing updated cluster config ...
	I1105 10:03:14.073950   19703 out.go:201] 
	I1105 10:03:14.095736   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:14.095829   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:14.117446   19703 out.go:177] * Starting "ha-213000-m03" control-plane node in "ha-213000" cluster
	I1105 10:03:14.159672   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:03:14.159707   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:03:14.159953   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:03:14.159973   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:03:14.160101   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:14.161238   19703 start.go:360] acquireMachinesLock for ha-213000-m03: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:03:14.161402   19703 start.go:364] duration metric: took 132.038µs to acquireMachinesLock for "ha-213000-m03"
	I1105 10:03:14.161444   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:03:14.161537   19703 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I1105 10:03:14.203378   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:03:14.203509   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:14.203540   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:14.215464   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57709
	I1105 10:03:14.215800   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:14.216175   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:14.216187   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:14.216413   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:14.216532   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:14.216639   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:14.216755   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:03:14.216774   19703 client.go:168] LocalClient.Create starting
	I1105 10:03:14.216804   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:03:14.216876   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:03:14.216886   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:03:14.216927   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:03:14.216976   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:03:14.216986   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:03:14.217000   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:03:14.217004   19703 main.go:141] libmachine: (ha-213000-m03) Calling .PreCreateCheck
	I1105 10:03:14.217109   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:14.217166   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:14.217654   19703 main.go:141] libmachine: Creating machine...
	I1105 10:03:14.217662   19703 main.go:141] libmachine: (ha-213000-m03) Calling .Create
	I1105 10:03:14.217732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:14.217901   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.217732   19773 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:03:14.217969   19703 main.go:141] libmachine: (ha-213000-m03) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:03:14.490580   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.490490   19773 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa...
	I1105 10:03:14.554451   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.554363   19773 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk...
	I1105 10:03:14.554467   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Writing magic tar header
	I1105 10:03:14.554475   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Writing SSH key tar header
	I1105 10:03:14.555306   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.555244   19773 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03 ...
	I1105 10:03:15.030518   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:15.030575   19703 main.go:141] libmachine: (ha-213000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid
	I1105 10:03:15.030627   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Using UUID 9e834d88-ec2a-4703-a798-2d165259ce86
	I1105 10:03:15.063985   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Generated MAC 06:83:5c:e9:cb:34
	I1105 10:03:15.064010   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:03:15.064043   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9e834d88-ec2a-4703-a798-2d165259ce86", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:03:15.064075   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9e834d88-ec2a-4703-a798-2d165259ce86", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:03:15.064114   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9e834d88-ec2a-4703-a798-2d165259ce86", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:03:15.064146   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9e834d88-ec2a-4703-a798-2d165259ce86 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:03:15.064163   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:03:15.067111   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Pid is 19776
	I1105 10:03:15.067572   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 0
	I1105 10:03:15.067585   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:15.067601   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:15.068753   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:15.068832   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:15.068842   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:15.068849   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:15.068858   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:15.068869   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:15.068885   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:15.077682   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:03:15.086555   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:03:15.087707   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:03:15.087735   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:03:15.087751   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:03:15.087769   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:03:15.487970   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:03:15.487985   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:03:15.602906   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:03:15.602926   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:03:15.602934   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:03:15.602939   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:03:15.603764   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:03:15.603775   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:03:17.070391   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 1
	I1105 10:03:17.070406   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:17.070484   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:17.071430   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:17.071487   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:17.071507   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:17.071517   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:17.071526   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:17.071533   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:17.071540   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:19.071643   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 2
	I1105 10:03:19.071657   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:19.071732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:19.072732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:19.072790   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:19.072797   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:19.072820   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:19.072832   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:19.072839   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:19.072847   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:21.074196   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 3
	I1105 10:03:21.074212   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:21.074292   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:21.075239   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:21.075306   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:21.075318   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:21.075336   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:21.075342   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:21.075348   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:21.075356   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:21.396531   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:03:21.396580   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:03:21.396612   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:03:21.420738   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:03:23.075524   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 4
	I1105 10:03:23.075538   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:23.075648   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:23.076609   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:23.076667   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:23.076676   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:23.076684   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:23.076690   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:23.076697   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:23.076705   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:25.077448   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 5
	I1105 10:03:25.077468   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:25.077588   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:25.078838   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:25.078950   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1105 10:03:25.078960   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a6bfc}
	I1105 10:03:25.078966   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found match: 06:83:5c:e9:cb:34
	I1105 10:03:25.078970   19703 main.go:141] libmachine: (ha-213000-m03) DBG | IP: 192.169.0.7
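	An aside on the lease match above: the search key is the VM's MAC `06:83:5c:e9:cb:34`, but the matching lease's `ID` field reads `1,6:83:5c:e9:cb:34` — hyperkit's dhcpd drops leading zeros in each octet, so a literal string compare against the ID would miss. A minimal Go sketch of that normalization (function name is illustrative, not minikube's):

	```go
	package main

	import (
		"fmt"
		"strings"
	)

	// trimMACZeros drops leading zeros from each octet of a MAC address,
	// matching the zero-stripped form that appears in the dhcpd lease ID
	// field in the log (e.g. "06:83:5c:e9:cb:34" -> "6:83:5c:e9:cb:34").
	func trimMACZeros(mac string) string {
		parts := strings.Split(mac, ":")
		for i, p := range parts {
			trimmed := strings.TrimLeft(p, "0")
			if trimmed == "" {
				trimmed = "0" // an all-zero octet stays "0"
			}
			parts[i] = trimmed
		}
		return strings.Join(parts, ":")
	}

	func main() {
		fmt.Println(trimMACZeros("06:83:5c:e9:cb:34"))
	}
	```

	With this normalization the search key and the lease ID compare equal, which is why "Found match" fires on attempt 5 once the lease appears.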
	I1105 10:03:25.079034   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:25.079648   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:25.079753   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:25.079858   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:03:25.079867   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:03:25.079968   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:25.080027   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:25.081028   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:03:25.081037   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:03:25.081043   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:03:25.081047   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:25.081134   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:25.081211   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:25.081299   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:25.081396   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:25.081991   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:25.082321   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:25.082330   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:03:26.133521   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:03:26.133535   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:03:26.133540   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.133696   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.133825   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.133956   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.134044   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.134222   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.134364   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.134372   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:03:26.183718   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:03:26.183767   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:03:26.183774   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:03:26.183779   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.183908   19703 buildroot.go:166] provisioning hostname "ha-213000-m03"
	I1105 10:03:26.183917   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.184015   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.184096   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.184192   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.184276   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.184357   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.184497   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.184635   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.184643   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m03 && echo "ha-213000-m03" | sudo tee /etc/hostname
	I1105 10:03:26.245764   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m03
	
	I1105 10:03:26.245779   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.245911   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.246034   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.246135   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.246225   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.246371   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.246514   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.246525   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:03:26.304895   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:03:26.304911   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:03:26.304922   19703 buildroot.go:174] setting up certificates
	I1105 10:03:26.304929   19703 provision.go:84] configureAuth start
	I1105 10:03:26.304936   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.305070   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:26.305166   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.305256   19703 provision.go:143] copyHostCerts
	I1105 10:03:26.305284   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:03:26.305330   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:03:26.305336   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:03:26.305479   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:03:26.305706   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:03:26.305741   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:03:26.305746   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:03:26.305833   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:03:26.305989   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:03:26.306040   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:03:26.306045   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:03:26.306127   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:03:26.306297   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m03 san=[127.0.0.1 192.169.0.7 ha-213000-m03 localhost minikube]
	I1105 10:03:26.464060   19703 provision.go:177] copyRemoteCerts
	I1105 10:03:26.464124   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:03:26.464140   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.464292   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.464393   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.464474   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.464559   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:26.496436   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:03:26.496516   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:03:26.516600   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:03:26.516672   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:03:26.535607   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:03:26.535680   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:03:26.556861   19703 provision.go:87] duration metric: took 251.926291ms to configureAuth
	I1105 10:03:26.556882   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:03:26.557331   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:26.557344   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:26.557488   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.557585   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.557665   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.557758   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.557840   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.557971   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.558096   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.558106   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:03:26.608963   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:03:26.608976   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:03:26.609053   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:03:26.609068   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.609212   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.609317   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.609404   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.609483   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.609628   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.609762   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.609808   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:03:26.670604   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:03:26.670621   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.670766   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.670854   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.670959   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.671050   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.671201   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.671337   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.671349   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:03:28.299545   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
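	The one-liner above is an idempotent "install unit if changed" pattern: the unit is only swapped in, and docker only restarted, when the rendered file actually differs from the installed one (here `diff` fails because no unit exists yet, so the install branch runs). A stand-alone sketch of the same pattern, with placeholder paths and the `systemctl` calls elided:

	```shell
	#!/bin/sh
	# Idempotent unit install, mirroring the diff||{mv;...} command in the
	# log. Paths and unit body are stand-ins; on a real host the "updated"
	# branch would also run daemon-reload / enable / restart.
	UNIT=/tmp/demo-docker.service
	render() { printf '%s\n' '[Unit]' 'Description=demo' > "$UNIT.new"; }

	install_if_changed() {
	    render
	    if diff -u "$UNIT" "$UNIT.new" >/dev/null 2>&1; then
	        rm -f "$UNIT.new"     # identical: discard the rendered copy
	        echo "unchanged"
	    else
	        mv "$UNIT.new" "$UNIT"  # new or different: install it
	        echo "updated"
	    fi
	}

	rm -f "$UNIT"
	install_if_changed   # first run: no existing unit -> updated
	install_if_changed   # second run: identical content -> unchanged
	```

	The second invocation taking the "unchanged" branch is what keeps repeated provisioning runs from restarting docker needlessly.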
	
	I1105 10:03:28.299559   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:03:28.299574   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetURL
	I1105 10:03:28.299721   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:03:28.299730   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:03:28.299735   19703 client.go:171] duration metric: took 14.083083071s to LocalClient.Create
	I1105 10:03:28.299751   19703 start.go:167] duration metric: took 14.083123931s to libmachine.API.Create "ha-213000"
	I1105 10:03:28.299756   19703 start.go:293] postStartSetup for "ha-213000-m03" (driver="hyperkit")
	I1105 10:03:28.299763   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:03:28.299775   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.299931   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:03:28.299943   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.300030   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.300114   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.300191   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.300269   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:28.335699   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:03:28.339827   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:03:28.339839   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:03:28.339952   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:03:28.340166   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:03:28.340173   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:03:28.340432   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:03:28.353898   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:03:28.384126   19703 start.go:296] duration metric: took 84.362542ms for postStartSetup
	I1105 10:03:28.384153   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:28.384834   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:28.385024   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:28.385403   19703 start.go:128] duration metric: took 14.223987778s to createHost
	I1105 10:03:28.385418   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.385511   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.385585   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.385675   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.385752   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.385869   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:28.385999   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:28.386006   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:03:28.435792   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829808.714335766
	
	I1105 10:03:28.435806   19703 fix.go:216] guest clock: 1730829808.714335766
	I1105 10:03:28.435811   19703 fix.go:229] Guest: 2024-11-05 10:03:28.714335766 -0800 PST Remote: 2024-11-05 10:03:28.385413 -0800 PST m=+159.952313720 (delta=328.922766ms)
	I1105 10:03:28.435825   19703 fix.go:200] guest clock delta is within tolerance: 328.922766ms
	I1105 10:03:28.435829   19703 start.go:83] releasing machines lock for "ha-213000-m03", held for 14.274546252s
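	The clock-skew check above runs `date +%s.%N` on the guest and compares it against the host's wall clock, accepting the ~329ms delta as within tolerance. A hedged Go sketch of that comparison, using the sample values from the log (the function name and structure are illustrative, not minikube's `fix.go`):

	```go
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output (seconds with a
	// fractional part) and returns the signed guest-minus-host offset.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		sec, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(sec*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Host timestamp sampled from the log: 2024-11-05 10:03:28.385413 PST.
		host := time.Unix(1730829808, 385413000)
		d, _ := clockDelta("1730829808.714335766", host)
		fmt.Println(d.Round(time.Millisecond))
	}
	```

	Note that parsing through a float64 loses sub-microsecond precision at this epoch magnitude, which is harmless here since the tolerance check operates at millisecond scale.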
	I1105 10:03:28.435845   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.435975   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:28.463026   19703 out.go:177] * Found network options:
	I1105 10:03:28.524451   19703 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:03:28.550710   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:03:28.550742   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:03:28.550759   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.551499   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.551696   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	W1105 10:03:28.551855   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:03:28.551871   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:03:28.551938   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:03:28.551950   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.552047   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.552054   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:03:28.552073   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.552162   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.552174   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.552281   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.552298   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.552403   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.552430   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:28.552508   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	W1105 10:03:28.623667   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:03:28.623763   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:03:28.636396   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:03:28.636411   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:03:28.636477   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:03:28.651269   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:03:28.659803   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:03:28.668221   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:03:28.668301   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:03:28.676635   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:03:28.684733   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:03:28.693062   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:03:28.701350   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:03:28.709600   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:03:28.717536   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:03:28.725790   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:03:28.734256   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:03:28.741810   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:03:28.741868   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:03:28.750498   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
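	The sequence above shows a fallback: `sysctl net.bridge.bridge-nf-call-iptables` exits 255 because the key does not exist until the `br_netfilter` module is loaded, so the code treats that failure as "try modprobe" rather than a hard error. A small sketch of that decision, with the probe's exit status simulated so the example is host-independent:

	```shell
	#!/bin/sh
	# check_or_probe mimics the fallback in the log: a non-zero sysctl
	# probe status (255 here, "cannot stat .../bridge-nf-call-iptables")
	# means the bridge-netfilter key is absent, so attempt to load the
	# module; status 0 means the key is already present.
	check_or_probe() {
	    sysctl_status=$1   # simulated exit status of the sysctl probe
	    if [ "$sysctl_status" -ne 0 ]; then
	        echo "modprobe br_netfilter"   # would run under sudo on a real host
	    else
	        echo "netfilter ok"
	    fi
	}

	check_or_probe 255
	check_or_probe 0
	```

	On the Buildroot guest the modprobe succeeds, which is why the run proceeds straight to enabling `ip_forward` without logging a second error.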
	I1105 10:03:28.757777   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:28.848477   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:03:28.867603   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:03:28.867693   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:03:28.882469   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:03:28.893733   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:03:28.910872   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:03:28.921618   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:03:28.931860   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:03:28.955674   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:03:28.966135   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:03:28.981687   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:03:28.984719   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:03:28.992276   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:03:29.007094   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:03:29.103508   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:03:29.207614   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:03:29.207637   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:03:29.221804   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:29.326678   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:03:31.637809   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31112896s)
	I1105 10:03:31.637894   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:03:31.648270   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:03:31.661112   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:03:31.671447   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:03:31.763823   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:03:31.864111   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:31.960037   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:03:31.972710   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:03:31.983457   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:32.073613   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:03:32.131634   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:03:32.132384   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:03:32.136690   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:03:32.136768   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:03:32.139750   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:03:32.167666   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:03:32.167752   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:03:32.185621   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:03:32.230649   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:03:32.272942   19703 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:03:32.316102   19703 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1105 10:03:32.337095   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:32.337555   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:03:32.342111   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:03:32.352735   19703 mustload.go:65] Loading cluster: ha-213000
	I1105 10:03:32.352915   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:32.353165   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:32.353188   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:32.364602   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57732
	I1105 10:03:32.364908   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:32.365258   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:32.365274   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:32.365482   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:32.365613   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:03:32.365706   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:32.365791   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:03:32.366950   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:03:32.367212   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:32.367238   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:32.378822   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57734
	I1105 10:03:32.379151   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:32.379474   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:32.379486   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:32.379723   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:32.379829   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:03:32.379937   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.7
	I1105 10:03:32.379942   19703 certs.go:194] generating shared ca certs ...
	I1105 10:03:32.379956   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.380143   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:03:32.380237   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:03:32.380246   19703 certs.go:256] generating profile certs ...
	I1105 10:03:32.380342   19703 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:03:32.380361   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9
	I1105 10:03:32.380396   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1105 10:03:32.531495   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 ...
	I1105 10:03:32.531519   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9: {Name:mked1b883793443cd41069aa04846ce3d13e3cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.531897   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9 ...
	I1105 10:03:32.531907   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9: {Name:mkc6838eeb283dd1eaf268f9b1d512c474d2ec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.532158   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:03:32.532364   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:03:32.532662   19703 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:03:32.532672   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:03:32.532701   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:03:32.532722   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:03:32.532741   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:03:32.532759   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:03:32.532779   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:03:32.532797   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:03:32.532819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:03:32.532921   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:03:32.532977   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:03:32.532985   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:03:32.533022   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:03:32.533055   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:03:32.533086   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:03:32.533156   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:03:32.533192   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:03:32.533220   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.533241   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.533273   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:03:32.533416   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:03:32.533504   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:03:32.533579   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:03:32.533666   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:03:32.562870   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:03:32.566125   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:03:32.574941   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:03:32.577997   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:03:32.590640   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:03:32.593905   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:03:32.603441   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:03:32.607210   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:03:32.616800   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:03:32.620077   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:03:32.629598   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:03:32.632721   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:03:32.641637   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:03:32.661020   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:03:32.681195   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:03:32.700777   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:03:32.719964   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 10:03:32.740252   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:03:32.759642   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:03:32.778570   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:03:32.798835   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:03:32.818449   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:03:32.837230   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:03:32.856822   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:03:32.870143   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:03:32.883780   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:03:32.897915   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:03:32.911578   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:03:32.925103   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:03:32.938796   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:03:32.953077   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:03:32.957362   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:03:32.966865   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.970193   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.970241   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.974304   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:03:32.983690   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:03:32.993172   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.996898   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.996967   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:03:33.001371   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:03:33.010757   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:03:33.020281   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.023901   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.023960   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.028229   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:03:33.038224   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:03:33.041604   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:03:33.041640   19703 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1105 10:03:33.041693   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:03:33.041717   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:03:33.041764   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:03:33.054458   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:03:33.054499   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:03:33.054566   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:03:33.063806   19703 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 10:03:33.063883   19703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 10:03:33.072691   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 10:03:33.072692   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 10:03:33.072707   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:03:33.072712   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:03:33.072691   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 10:03:33.072776   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:03:33.072833   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:03:33.072833   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:03:33.084670   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:03:33.084705   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 10:03:33.084732   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 10:03:33.084803   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 10:03:33.084830   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 10:03:33.084849   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:03:33.112916   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 10:03:33.112953   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 10:03:33.638088   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:03:33.646375   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:03:33.662178   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:03:33.676109   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:03:33.690081   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:03:33.693205   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:03:33.703729   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:33.801135   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:03:33.817719   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:03:33.818043   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:33.818070   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:33.829754   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57737
	I1105 10:03:33.830093   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:33.830434   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:33.830444   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:33.830649   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:33.830744   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:03:33.830850   19703 start.go:317] joinCluster: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:03:33.830949   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 10:03:33.830963   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:03:33.831038   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:03:33.831160   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:03:33.831264   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:03:33.831351   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:03:33.915396   19703 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:03:33.915427   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3af4oc.mrofw1iihstmy2lp --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m03 --control-plane --apiserver-advertise-address=192.169.0.7 --apiserver-bind-port=8443"
	I1105 10:04:05.463364   19703 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3af4oc.mrofw1iihstmy2lp --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m03 --control-plane --apiserver-advertise-address=192.169.0.7 --apiserver-bind-port=8443": (31.548185064s)
	I1105 10:04:05.463394   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 10:04:05.926039   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000-m03 minikube.k8s.io/updated_at=2024_11_05T10_04_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=false
	I1105 10:04:06.005817   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-213000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 10:04:06.089769   19703 start.go:319] duration metric: took 32.259206586s to joinCluster
	I1105 10:04:06.089835   19703 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:04:06.090023   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:04:06.144884   19703 out.go:177] * Verifying Kubernetes components...
	I1105 10:04:06.218462   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:04:06.491890   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:04:06.522724   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:04:06.522981   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:04:06.523027   19703 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:04:06.523216   19703 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m03" to be "Ready" ...
	I1105 10:04:06.523272   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:06.523278   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:06.523284   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:06.523289   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:06.525762   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:07.024768   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:07.024785   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:07.024792   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:07.024796   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:07.026838   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:07.523802   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:07.523816   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:07.523823   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:07.523827   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:07.525952   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.024543   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:08.024558   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:08.024565   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:08.024567   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:08.026818   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.523406   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:08.523421   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:08.523428   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:08.523431   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:08.525613   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.526028   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:09.023489   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:09.023507   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:09.023515   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:09.023518   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:09.025718   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:09.524490   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:09.524507   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:09.524536   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:09.524542   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:09.526550   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:10.024748   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:10.024763   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:10.024770   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:10.024773   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:10.026854   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:10.523405   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:10.523428   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:10.523434   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:10.523438   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:10.525879   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:10.526310   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:11.024586   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:11.024601   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:11.024608   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:11.024611   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:11.026801   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:11.524591   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:11.524616   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:11.524627   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:11.524633   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:11.528710   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:12.023846   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:12.023871   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:12.023892   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:12.023899   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:12.026343   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:12.523660   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:12.523678   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:12.523687   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:12.523692   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:12.526168   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:12.526545   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:13.024493   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:13.024549   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:13.024558   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:13.024562   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:13.026612   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:13.523333   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:13.523377   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:13.523386   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:13.523391   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:13.526060   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.023778   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:14.023813   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:14.023821   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:14.023826   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:14.026277   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.524791   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:14.524807   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:14.524814   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:14.524818   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:14.526944   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.527389   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:15.024235   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:15.024251   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:15.024257   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:15.024261   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:15.026360   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:15.523297   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:15.523315   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:15.523340   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:15.523344   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:15.525650   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.024088   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:16.024104   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:16.024111   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:16.024114   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:16.026234   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.524762   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:16.524781   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:16.524790   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:16.524794   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:16.527186   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.527528   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:17.024111   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:17.024127   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:17.024133   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:17.024137   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:17.026641   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:17.523462   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:17.523498   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:17.523506   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:17.523509   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:17.525790   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:18.023254   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:18.023271   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:18.023277   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:18.023280   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:18.025709   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:18.523323   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:18.523337   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:18.523343   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:18.523347   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:18.526016   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:19.023466   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:19.023481   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:19.023498   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:19.023501   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:19.026019   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:19.026436   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:19.523232   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:19.523250   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:19.523258   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:19.523262   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:19.525574   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:20.025183   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:20.025202   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:20.025211   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:20.025217   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:20.027796   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:20.524157   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:20.524199   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:20.524209   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:20.524214   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:20.526298   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:21.023312   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:21.023328   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:21.023335   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:21.023338   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:21.025776   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:21.525084   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:21.525110   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:21.525123   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:21.525129   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:21.528173   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:21.528632   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:22.023245   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.023263   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.023272   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.023276   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.025668   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.524560   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.524575   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.524580   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.524582   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.526635   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.527110   19703 node_ready.go:49] node "ha-213000-m03" has status "Ready":"True"
	I1105 10:04:22.527120   19703 node_ready.go:38] duration metric: took 16.004036788s for node "ha-213000-m03" to be "Ready" ...
	I1105 10:04:22.527128   19703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:04:22.527166   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:22.527172   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.527177   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.527182   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.533505   19703 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:04:22.539225   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.539271   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:04:22.539277   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.539283   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.539289   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.541288   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.541882   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.541890   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.541895   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.541898   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.543858   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.544129   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.544138   19703 pod_ready.go:82] duration metric: took 4.901387ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.544145   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.544181   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:04:22.544186   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.544191   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.544195   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.545938   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.546421   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.546429   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.546436   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.546439   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.548600   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.548988   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.548997   19703 pod_ready.go:82] duration metric: took 4.847138ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.549007   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.549053   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:04:22.549059   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.549065   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.549067   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.550912   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.551584   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.551591   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.551597   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.551600   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.553276   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.553666   19703 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.553676   19703 pod_ready.go:82] duration metric: took 4.662923ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.553683   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.553721   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:04:22.553726   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.553732   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.553735   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.555620   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.556112   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:22.556119   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.556124   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.556128   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.557964   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.558427   19703 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.558437   19703 pod_ready.go:82] duration metric: took 4.748625ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.558444   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.725676   19703 request.go:632] Waited for 167.192719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:22.725734   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:22.725741   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.725750   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.725757   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.728337   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.924860   19703 request.go:632] Waited for 196.058895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.925006   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.925029   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.925044   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.925054   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.929161   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:23.125347   19703 request.go:632] Waited for 65.258433ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.125410   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.125417   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.125424   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.125429   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.128075   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.326631   19703 request.go:632] Waited for 198.115235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.326702   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.326713   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.326721   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.326726   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.329257   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.559388   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.559410   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.559419   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.559423   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.561604   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.724668   19703 request.go:632] Waited for 162.701435ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.724727   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.724733   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.724740   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.724746   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.726755   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:24.059136   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:24.059155   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.059188   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.059194   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.061686   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:24.125899   19703 request.go:632] Waited for 63.704238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:24.126005   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:24.126016   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.126028   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.126034   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.129471   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:24.129800   19703 pod_ready.go:93] pod "etcd-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.129809   19703 pod_ready.go:82] duration metric: took 1.571374275s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.129820   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.325915   19703 request.go:632] Waited for 196.033511ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:04:24.326035   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:04:24.326046   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.326057   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.326064   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.329258   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:24.525894   19703 request.go:632] Waited for 195.976303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:24.525950   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:24.525957   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.525965   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.525970   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.531038   19703 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:04:24.531331   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.531341   19703 pod_ready.go:82] duration metric: took 401.519758ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.531348   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.725411   19703 request.go:632] Waited for 194.029144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:04:24.725452   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:04:24.725457   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.725484   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.725488   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.727336   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:24.924946   19703 request.go:632] Waited for 197.104111ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:24.925003   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:24.925010   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.925018   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.925024   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.927806   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:24.928044   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.928052   19703 pod_ready.go:82] duration metric: took 396.702505ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.928062   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.125637   19703 request.go:632] Waited for 197.516414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:04:25.125722   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:04:25.125731   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.125739   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.125747   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.128388   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.325342   19703 request.go:632] Waited for 196.567129ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:25.325384   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:25.325390   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.325430   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.325437   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.327703   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.327989   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:25.327998   19703 pod_ready.go:82] duration metric: took 399.934252ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.328005   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.526534   19703 request.go:632] Waited for 198.484556ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:04:25.526593   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:04:25.526601   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.526608   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.526614   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.528989   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.725913   19703 request.go:632] Waited for 196.422028ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:25.725987   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:25.725997   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.726008   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.726031   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.728724   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.729094   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:25.729103   19703 pod_ready.go:82] duration metric: took 401.096776ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.729112   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.924767   19703 request.go:632] Waited for 195.60365ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:04:25.924865   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:04:25.924875   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.924888   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.924896   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.928404   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:26.125908   19703 request.go:632] Waited for 196.895961ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:26.125983   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:26.125991   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.125999   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.126005   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.128293   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.128631   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.128641   19703 pod_ready.go:82] duration metric: took 399.525738ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.128647   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.324632   19703 request.go:632] Waited for 195.949532ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:04:26.324692   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:04:26.324698   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.324704   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.324708   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.326997   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.525533   19703 request.go:632] Waited for 198.105799ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.525578   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.525606   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.525616   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.525621   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.529215   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:26.529581   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.529590   19703 pod_ready.go:82] duration metric: took 400.941913ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.529597   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.726009   19703 request.go:632] Waited for 196.373053ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:04:26.726076   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:04:26.726082   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.726088   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.726092   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.728138   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.925481   19703 request.go:632] Waited for 196.839411ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.925524   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.925543   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.925555   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.925559   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.927642   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.927909   19703 pod_ready.go:93] pod "kube-proxy-5ldvg" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.927918   19703 pod_ready.go:82] duration metric: took 398.31947ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.927925   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.124645   19703 request.go:632] Waited for 196.662774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:04:27.124698   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:04:27.124740   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.124753   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.124761   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.128295   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:27.325739   19703 request.go:632] Waited for 196.804785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:27.325845   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:27.325854   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.325862   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.325867   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.328452   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.328710   19703 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:27.328719   19703 pod_ready.go:82] duration metric: took 400.792251ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.328725   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.525473   19703 request.go:632] Waited for 196.70325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:04:27.525570   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:04:27.525581   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.525593   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.525602   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.528326   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.725203   19703 request.go:632] Waited for 196.519889ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:27.725279   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:27.725285   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.725292   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.725297   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.727708   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.728131   19703 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:27.728140   19703 pod_ready.go:82] duration metric: took 399.413452ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.728146   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.924670   19703 request.go:632] Waited for 196.486132ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:04:27.924745   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:04:27.924768   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.924780   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.924785   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.926872   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.126299   19703 request.go:632] Waited for 199.099089ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:28.126434   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:28.126444   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.126455   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.126469   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.129846   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:28.130229   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.130241   19703 pod_ready.go:82] duration metric: took 402.092729ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.130250   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.325028   19703 request.go:632] Waited for 194.730914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:04:28.325106   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:04:28.325115   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.325127   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.325137   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.327834   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.524776   19703 request.go:632] Waited for 196.527612ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:28.524860   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:28.524877   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.524889   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.524897   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.528055   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:28.528583   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.528595   19703 pod_ready.go:82] duration metric: took 398.343246ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.528604   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.724665   19703 request.go:632] Waited for 196.022312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:04:28.724698   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:04:28.724704   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.724714   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.724740   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.726671   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:28.924585   19703 request.go:632] Waited for 197.482088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:28.924621   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:28.924626   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.924638   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.924641   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.927175   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.927434   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.927445   19703 pod_ready.go:82] duration metric: took 398.83876ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.927453   19703 pod_ready.go:39] duration metric: took 6.40037569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:04:28.927464   19703 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:04:28.927539   19703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:04:28.939767   19703 api_server.go:72] duration metric: took 22.850118644s to wait for apiserver process to appear ...
	I1105 10:04:28.939780   19703 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:04:28.939792   19703 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:04:28.942841   19703 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:04:28.942878   19703 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:04:28.942883   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.942889   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.942894   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.943424   19703 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:04:28.943458   19703 api_server.go:141] control plane version: v1.31.2
	I1105 10:04:28.943466   19703 api_server.go:131] duration metric: took 3.681494ms to wait for apiserver health ...
	I1105 10:04:28.943471   19703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:04:29.125181   19703 request.go:632] Waited for 181.649913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.125250   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.125257   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.125265   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.125273   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.129049   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:29.134047   19703 system_pods.go:59] 24 kube-system pods found
	I1105 10:04:29.134060   19703 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:04:29.134064   19703 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:04:29.134067   19703 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:04:29.134070   19703 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:04:29.134073   19703 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:04:29.134076   19703 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:04:29.134078   19703 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:04:29.134083   19703 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:04:29.134089   19703 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:04:29.134092   19703 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:04:29.134095   19703 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:04:29.134098   19703 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:04:29.134101   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:04:29.134103   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:04:29.134106   19703 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:04:29.134109   19703 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:04:29.134113   19703 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:04:29.134116   19703 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:04:29.134119   19703 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:04:29.134121   19703 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:04:29.134124   19703 system_pods.go:61] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:04:29.134126   19703 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:04:29.134129   19703 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:04:29.134131   19703 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:04:29.134136   19703 system_pods.go:74] duration metric: took 190.663227ms to wait for pod list to return data ...
	I1105 10:04:29.134141   19703 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:04:29.325174   19703 request.go:632] Waited for 190.972254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:04:29.325306   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:04:29.325317   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.325328   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.325334   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.328806   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:29.328877   19703 default_sa.go:45] found service account: "default"
	I1105 10:04:29.328886   19703 default_sa.go:55] duration metric: took 194.742768ms for default service account to be created ...
	I1105 10:04:29.328892   19703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:04:29.525825   19703 request.go:632] Waited for 196.894286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.525885   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.525891   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.525900   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.525906   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.530238   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:29.535151   19703 system_pods.go:86] 24 kube-system pods found
	I1105 10:04:29.535162   19703 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:04:29.535166   19703 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:04:29.535169   19703 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:04:29.535173   19703 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:04:29.535176   19703 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:04:29.535179   19703 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:04:29.535182   19703 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:04:29.535186   19703 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:04:29.535189   19703 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:04:29.535192   19703 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:04:29.535195   19703 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:04:29.535198   19703 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:04:29.535203   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:04:29.535206   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:04:29.535209   19703 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:04:29.535212   19703 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:04:29.535214   19703 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:04:29.535217   19703 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:04:29.535220   19703 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:04:29.535224   19703 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:04:29.535226   19703 system_pods.go:89] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:04:29.535229   19703 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:04:29.535232   19703 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:04:29.535236   19703 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:04:29.535241   19703 system_pods.go:126] duration metric: took 206.346852ms to wait for k8s-apps to be running ...
	I1105 10:04:29.535246   19703 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:04:29.535311   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:04:29.546979   19703 system_svc.go:56] duration metric: took 11.728241ms WaitForService to wait for kubelet
	I1105 10:04:29.546999   19703 kubeadm.go:582] duration metric: took 23.457354958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:04:29.547010   19703 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:04:29.724995   19703 request.go:632] Waited for 177.933168ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:04:29.725067   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:04:29.725074   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.725082   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.725088   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.727706   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:29.728430   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728439   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728446   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728449   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728453   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728456   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728459   19703 node_conditions.go:105] duration metric: took 181.447674ms to run NodePressure ...
	I1105 10:04:29.728466   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:04:29.728479   19703 start.go:255] writing updated cluster config ...
	I1105 10:04:29.729489   19703 ssh_runner.go:195] Run: rm -f paused
	I1105 10:04:29.979871   19703 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1105 10:04:30.017888   19703 out.go:177] * Done! kubectl is now configured to use "ha-213000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d756c554cb1804008ed0d83f76add780a56ab524ce9ad727444994833786ca2/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14a06ee63dae33c8dba35c6c5dae9567da2ca60899210abc9f317c0880b139fc/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc924e17f3bb751fc1e52153e5ef02a65f98bbb979139ce33eaa22d0798983b8/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967239546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967309107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967317540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967390804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107710141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107910037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107968019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.108244444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119482556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119770623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119883235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.120049510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.619993345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620106148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620120050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620209774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:04:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a852f09c7c372466d6eaee2bbf93a0549f278dabf6e08a4bff1ae7c770405574/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 05 18:04:33 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:04:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358121990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358264406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358298713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358445332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	13c126a54f1e3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   a852f09c7c372       busybox-7dff88458-q5j74
	655c6025b3ad3       c69fa2e9cbf5f                                                                                         6 minutes ago       Running             coredns                   0                   fc924e17f3bb7       coredns-7c65d6cfc9-cv2cc
	478b52af51d4c       c69fa2e9cbf5f                                                                                         6 minutes ago       Running             coredns                   0                   14a06ee63dae3       coredns-7c65d6cfc9-q96rw
	a696b9219e867       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   8d756c554cb18       storage-provisioner
	c15d829a94cc1       kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16              6 minutes ago       Running             kindnet-cni               0                   fc1560dd926ec       kindnet-hppzk
	1707dd1e7b710       505d571f5fd56                                                                                         6 minutes ago       Running             kube-proxy                0                   c677886629450       kube-proxy-s8xxj
	e133549e344f8       ghcr.io/kube-vip/kube-vip@sha256:1ba8e6e7fe678a8779986a6b88a1f391c63f7fe3edd34b167dceed3f66e8c87e     6 minutes ago       Running             kube-vip                  0                   c50c39a35d466       kube-vip-ha-213000
	a3c0c64a3782d       9499c9960544e                                                                                         6 minutes ago       Running             kube-apiserver            0                   c31f45140546c       kube-apiserver-ha-213000
	0ea9be13ab8cd       847c7bc1a5418                                                                                         6 minutes ago       Running             kube-scheduler            0                   e5947d7e736c7       kube-scheduler-ha-213000
	968f538b61d4e       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      0                   75b49749f37e9       etcd-ha-213000
	3abc7a0629ac1       0486b6c53a1b5                                                                                         6 minutes ago       Running             kube-controller-manager   0                   356e1160051cf       kube-controller-manager-ha-213000
	
	
	==> coredns [478b52af51d4] <==
	[INFO] 10.244.0.4:55854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091157s
	[INFO] 10.244.0.4:46292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064292s
	[INFO] 10.244.0.4:40657 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075287s
	[INFO] 10.244.0.4:40797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000047063s
	[INFO] 10.244.0.4:57944 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092384s
	[INFO] 10.244.2.2:46924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091299s
	[INFO] 10.244.2.2:58313 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000054156s
	[INFO] 10.244.2.2:60784 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097833s
	[INFO] 10.244.2.2:45453 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000050266s
	[INFO] 10.244.2.2:34445 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089937s
	[INFO] 10.244.2.2:47005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097467s
	[INFO] 10.244.1.2:50221 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057047s
	[INFO] 10.244.1.2:57677 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089203s
	[INFO] 10.244.0.4:55860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068653s
	[INFO] 10.244.2.2:43135 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074016s
	[INFO] 10.244.2.2:55939 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120434s
	[INFO] 10.244.2.2:50062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004236s
	[INFO] 10.244.1.2:47130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093036s
	[INFO] 10.244.1.2:36124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107815s
	[INFO] 10.244.1.2:47802 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.1.2:50939 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000076401s
	[INFO] 10.244.0.4:52439 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046668s
	[INFO] 10.244.0.4:59917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065899s
	[INFO] 10.244.2.2:54610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146586s
	[INFO] 10.244.2.2:44903 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045712s
	
	
	==> coredns [655c6025b3ad] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41565 - 52279 "HINFO IN 3928448342492679704.6484769811595158491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01055942s
	[INFO] 10.244.1.2:44772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001930473s
	[INFO] 10.244.1.2:55396 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.173730652s
	[INFO] 10.244.1.2:55075 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.046822424s
	[INFO] 10.244.2.2:46916 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000083419s
	[INFO] 10.244.2.2:50720 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010115s
	[INFO] 10.244.1.2:40476 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129724s
	[INFO] 10.244.1.2:38997 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096588s
	[INFO] 10.244.1.2:47386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084243s
	[INFO] 10.244.0.4:36440 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000654701s
	[INFO] 10.244.0.4:54567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087223s
	[INFO] 10.244.0.4:51050 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103169s
	[INFO] 10.244.2.2:55487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00066824s
	[INFO] 10.244.2.2:46388 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075057s
	[INFO] 10.244.1.2:44219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153172s
	[INFO] 10.244.1.2:57067 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012789s
	[INFO] 10.244.0.4:39514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159605s
	[INFO] 10.244.0.4:48601 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049327s
	[INFO] 10.244.0.4:42037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113025s
	[INFO] 10.244.2.2:54065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100908s
	[INFO] 10.244.0.4:48546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091627s
	[INFO] 10.244.0.4:58260 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121652s
	[INFO] 10.244.2.2:59084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090378s
	[INFO] 10.244.2.2:46960 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000044449s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a564c48e26a04536b809c68ac140133d
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    a364bf87-b805-465e-9b8e-7bb15a7511fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m27s
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m38s (x7 over 6m38s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m31s                  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s                  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s                  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                6m7s                   kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:05:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe9d6fab7c594c258d6faf081338352a
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    648e1173-cbdb-42eb-9fce-79e6f778bcc4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           3m47s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             102s                 node-controller  Node ha-213000-m02 status is now: NodeNotReady
	
	
	Name:               ha-213000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-213000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 36fce1bb5353483a8c61e47d06795490
	  System UUID:                9e834703-0000-0000-a798-2d165259ce86
	  Boot ID:                    52f0306a-86b9-41a1-bf8e-c6bebad66edd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x9hwg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-ha-213000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-trfhn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-213000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-ha-213000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-5ldvg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-213000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-vip-ha-213000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-213000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-213000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-213000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 9dbfab1abbaa466d920d386afdae83f4
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    7277bbeb-aa13-4ef8-b3e3-22ba82158b7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p4bx6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-m45pk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m59s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m59s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m59s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  RegisteredNode           2m56s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  NodeReady                2m36s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.822548] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[Nov 5 18:01] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.371107] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +0.098303] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +1.766719] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.299452] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.101798] systemd-fstab-generator[851]: Ignoring "noauto" option for root device
	[  +0.116525] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +2.427440] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.092436] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.099684] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.061433] kauditd_printk_skb: 233 callbacks suppressed
	[  +0.078398] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +3.438367] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +2.210589] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.377139] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.489712] systemd-fstab-generator[1610]: Ignoring "noauto" option for root device
	[  +1.395421] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.860785] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.083307] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.458645] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.344532] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 5 18:02] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [968f538b61d4] <==
	{"level":"warn","ts":"2024-11-05T18:07:29.773146Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:31.465300Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:31.465349Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:34.773927Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:34.774002Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:35.466979Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:35.467077Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.468800Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.468851Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.774640Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.774659Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:43.472095Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:43.472267Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:44.775772Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:44.775850Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:47.474289Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:47.474371Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:49.776765Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:49.776821Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:51.478515Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:51.478565Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:54.777722Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:54.777784Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:55.479532Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:55.479668Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	
	
	==> kernel <==
	 18:07:58 up 7 min,  0 users,  load average: 0.20, 0.33, 0.17
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c15d829a94cc] <==
	I1105 18:07:27.422022       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:37.414652       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:37.414680       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:07:37.414837       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:37.414873       1 main.go:301] handling current node
	I1105 18:07:37.414884       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:37.414910       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:37.414980       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:37.415014       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:47.420917       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:47.420936       1 main.go:301] handling current node
	I1105 18:07:47.420946       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:47.420949       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:47.421047       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:47.421051       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:47.421211       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:47.421219       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:07:57.414448       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:57.414512       1 main.go:301] handling current node
	I1105 18:07:57.414523       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:57.414528       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:57.414708       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:57.414734       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:57.414784       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:57.414811       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a3c0c64a3782] <==
	I1105 18:01:24.508797       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1105 18:01:24.512609       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1105 18:01:24.513340       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:01:24.515760       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:01:25.122836       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:01:26.576236       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:01:26.588108       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:01:26.594865       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:01:30.774314       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:01:30.832293       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:04:35.148097       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57771: use of closed network connection
	E1105 18:04:35.373995       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57773: use of closed network connection
	E1105 18:04:35.583158       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57775: use of closed network connection
	E1105 18:04:35.792755       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57777: use of closed network connection
	E1105 18:04:35.994700       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57779: use of closed network connection
	E1105 18:04:36.197292       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57781: use of closed network connection
	E1105 18:04:36.403420       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57783: use of closed network connection
	E1105 18:04:36.603032       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57785: use of closed network connection
	E1105 18:04:36.809398       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57787: use of closed network connection
	E1105 18:04:37.166464       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57790: use of closed network connection
	E1105 18:04:37.374005       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57792: use of closed network connection
	E1105 18:04:37.593134       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57794: use of closed network connection
	E1105 18:04:37.798340       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57796: use of closed network connection
	E1105 18:04:37.996915       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57798: use of closed network connection
	E1105 18:04:38.199615       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57800: use of closed network connection
	
	
	==> kube-controller-manager [3abc7a0629ac] <==
	I1105 18:04:59.206231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.206302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.622575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.926061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.639227       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-213000-m04"
	I1105 18:05:00.668381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.863718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.938911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:01.879354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000"
	I1105 18:05:01.917330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:02.023557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:04.331384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m03"
	I1105 18:05:09.430571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.686438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.687560       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-213000-m04"
	I1105 18:05:21.698524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.928429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:29.675436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:06:15.656491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-213000-m04"
	I1105 18:06:15.656514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:15.666046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:15.680720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.448781ms"
	I1105 18:06:15.680907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.823µs"
	I1105 18:06:15.906178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:20.788789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	
	
	==> kube-proxy [1707dd1e7b71] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:01:32.964306       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:01:32.975224       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:01:32.975302       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:01:33.004972       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:01:33.005019       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:01:33.005036       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:01:33.007241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:01:33.007726       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:01:33.007754       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:01:33.009040       1 config.go:199] "Starting service config controller"
	I1105 18:01:33.009388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:01:33.009596       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:01:33.009623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:01:33.010305       1 config.go:328] "Starting node config controller"
	I1105 18:01:33.010330       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:01:33.110597       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:01:33.110614       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:01:33.110623       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ea9be13ab8c] <==
	E1105 18:01:24.259190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 18:01:26.254727       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:04:03.280280       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-trfhn\": pod kindnet-trfhn is already assigned to node \"ha-213000-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-trfhn" node="ha-213000-m03"
	E1105 18:04:03.285002       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6f39544f-a014-444c-8ad7-779e1940d254(kube-system/kindnet-trfhn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-trfhn"
	E1105 18:04:03.285696       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-trfhn\": pod kindnet-trfhn is already assigned to node \"ha-213000-m03\"" pod="kube-system/kindnet-trfhn"
	I1105 18:04:03.285865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-trfhn" node="ha-213000-m03"
	I1105 18:04:31.258177       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="69f86bc8-78ea-4277-b688-fd445c4f8f6e" pod="default/busybox-7dff88458-89r49" assumedNode="ha-213000-m02" currentNode="ha-213000-m03"
	I1105 18:04:31.268574       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="3a5c2f7c-8906-4561-8875-8736f45e3fda" pod="default/busybox-7dff88458-x9hwg" assumedNode="ha-213000-m03" currentNode="ha-213000-m02"
	E1105 18:04:31.273427       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-89r49\": pod busybox-7dff88458-89r49 is already assigned to node \"ha-213000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-89r49" node="ha-213000-m03"
	E1105 18:04:31.273527       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 69f86bc8-78ea-4277-b688-fd445c4f8f6e(default/busybox-7dff88458-89r49) was assumed on ha-213000-m03 but assigned to ha-213000-m02" pod="default/busybox-7dff88458-89r49"
	E1105 18:04:31.273547       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-89r49\": pod busybox-7dff88458-89r49 is already assigned to node \"ha-213000-m02\"" pod="default/busybox-7dff88458-89r49"
	I1105 18:04:31.273836       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-89r49" node="ha-213000-m02"
	I1105 18:04:31.281777       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="7f2e1057-5c45-4255-9c8e-d1eba882f2e5" pod="default/busybox-7dff88458-q5j74" assumedNode="ha-213000" currentNode="ha-213000-m03"
	E1105 18:04:31.287338       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x9hwg\": pod busybox-7dff88458-x9hwg is already assigned to node \"ha-213000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x9hwg" node="ha-213000-m02"
	E1105 18:04:31.287388       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a5c2f7c-8906-4561-8875-8736f45e3fda(default/busybox-7dff88458-x9hwg) was assumed on ha-213000-m02 but assigned to ha-213000-m03" pod="default/busybox-7dff88458-x9hwg"
	E1105 18:04:31.287401       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x9hwg\": pod busybox-7dff88458-x9hwg is already assigned to node \"ha-213000-m03\"" pod="default/busybox-7dff88458-x9hwg"
	I1105 18:04:31.287615       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x9hwg" node="ha-213000-m03"
	E1105 18:04:31.291529       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-q5j74\": pod busybox-7dff88458-q5j74 is already assigned to node \"ha-213000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-q5j74" node="ha-213000-m03"
	E1105 18:04:31.291599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7f2e1057-5c45-4255-9c8e-d1eba882f2e5(default/busybox-7dff88458-q5j74) was assumed on ha-213000-m03 but assigned to ha-213000" pod="default/busybox-7dff88458-q5j74"
	E1105 18:04:31.291701       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-q5j74\": pod busybox-7dff88458-q5j74 is already assigned to node \"ha-213000\"" pod="default/busybox-7dff88458-q5j74"
	I1105 18:04:31.291992       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-q5j74" node="ha-213000"
	E1105 18:04:59.242744       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2bx5\": pod kube-proxy-r2bx5 is already assigned to node \"ha-213000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2bx5" node="ha-213000-m04"
	E1105 18:04:59.242812       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2bx5\": pod kube-proxy-r2bx5 is already assigned to node \"ha-213000-m04\"" pod="kube-system/kube-proxy-r2bx5"
	E1105 18:04:59.243648       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4qmgf\": pod kindnet-4qmgf is already assigned to node \"ha-213000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4qmgf" node="ha-213000-m04"
	E1105 18:04:59.243714       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4qmgf\": pod kindnet-4qmgf is already assigned to node \"ha-213000-m04\"" pod="kube-system/kindnet-4qmgf"
	
	
	==> kubelet <==
	Nov 05 18:03:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:04:27 ha-213000 kubelet[2108]: E1105 18:04:27.199259    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:04:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:04:31 ha-213000 kubelet[2108]: I1105 18:04:31.275013    2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=180.274999296 podStartE2EDuration="3m0.274999296s" podCreationTimestamp="2024-11-05 18:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-05 18:01:51.416826219 +0000 UTC m=+24.377833961" watchObservedRunningTime="2024-11-05 18:04:31.274999296 +0000 UTC m=+184.236007040"
	Nov 05 18:04:31 ha-213000 kubelet[2108]: I1105 18:04:31.374929    2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88s2k\" (UniqueName: \"kubernetes.io/projected/7f2e1057-5c45-4255-9c8e-d1eba882f2e5-kube-api-access-88s2k\") pod \"busybox-7dff88458-q5j74\" (UID: \"7f2e1057-5c45-4255-9c8e-d1eba882f2e5\") " pod="default/busybox-7dff88458-q5j74"
	Nov 05 18:04:34 ha-213000 kubelet[2108]: I1105 18:04:34.308278    2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-q5j74" podStartSLOduration=1.766756631 podStartE2EDuration="3.308264067s" podCreationTimestamp="2024-11-05 18:04:31 +0000 UTC" firstStartedPulling="2024-11-05 18:04:31.760833399 +0000 UTC m=+184.721841133" lastFinishedPulling="2024-11-05 18:04:33.302340832 +0000 UTC m=+186.263348569" observedRunningTime="2024-11-05 18:04:34.308023413 +0000 UTC m=+187.269031168" watchObservedRunningTime="2024-11-05 18:04:34.308264067 +0000 UTC m=+187.269271805"
	Nov 05 18:04:36 ha-213000 kubelet[2108]: E1105 18:04:36.603308    2108 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54556->127.0.0.1:37937: write tcp 127.0.0.1:54556->127.0.0.1:37937: write: broken pipe
	Nov 05 18:05:27 ha-213000 kubelet[2108]: E1105 18:05:27.199657    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:05:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:06:27 ha-213000 kubelet[2108]: E1105 18:06:27.202597    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:06:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:07:27 ha-213000 kubelet[2108]: E1105 18:07:27.199645    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:07:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (130.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:309: expected profile "ha-213000" in json of 'profile list' to have "HAppy" status but have "Degraded" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-213000\",\"Status\":\"Degraded\",\"Config\":{\"Name\":\"ha-213000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-213000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (2.314538141s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m03_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:00:48
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:00:48.477016   19703 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:00:48.477674   19703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:48.477680   19703 out.go:358] Setting ErrFile to fd 2...
	I1105 10:00:48.477684   19703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:48.477879   19703 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:00:48.479709   19703 out.go:352] Setting JSON to false
	I1105 10:00:48.510951   19703 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7217,"bootTime":1730822431,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:00:48.511118   19703 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:00:48.569600   19703 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:00:48.610699   19703 notify.go:220] Checking for updates...
	I1105 10:00:48.634693   19703 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:00:48.698776   19703 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:00:48.753700   19703 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:00:48.775781   19703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:00:48.796789   19703 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:48.817657   19703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:00:48.839040   19703 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:00:48.871720   19703 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:00:48.913701   19703 start.go:297] selected driver: hyperkit
	I1105 10:00:48.913733   19703 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:00:48.913751   19703 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:00:48.920486   19703 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:00:48.920632   19703 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:00:48.931479   19703 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:00:48.937804   19703 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:48.937824   19703 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:00:48.937857   19703 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:00:48.938103   19703 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:00:48.938134   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:00:48.938170   19703 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1105 10:00:48.938175   19703 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 10:00:48.938248   19703 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:00:48.938346   19703 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:00:48.959836   19703 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:00:49.001660   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:00:49.001712   19703 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:00:49.001743   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:00:49.001910   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:00:49.001924   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:00:49.002321   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:00:49.002355   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json: {Name:mk69fb3d9aca0b41d8bea722484079aba6357863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:00:49.002868   19703 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:00:49.002969   19703 start.go:364] duration metric: took 85.161µs to acquireMachinesLock for "ha-213000"
	I1105 10:00:49.003007   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:00:49.003069   19703 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:00:49.024722   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:00:49.024964   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:49.025013   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:49.037332   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57614
	I1105 10:00:49.037758   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:49.038202   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:00:49.038215   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:49.038496   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:49.038622   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:00:49.038725   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:49.038849   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:00:49.038876   19703 client.go:168] LocalClient.Create starting
	I1105 10:00:49.038916   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:00:49.038980   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:00:49.038997   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:00:49.039061   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:00:49.039108   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:00:49.039119   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:00:49.039131   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:00:49.039140   19703 main.go:141] libmachine: (ha-213000) Calling .PreCreateCheck
	I1105 10:00:49.039304   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.039463   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:00:49.045805   19703 main.go:141] libmachine: Creating machine...
	I1105 10:00:49.045812   19703 main.go:141] libmachine: (ha-213000) Calling .Create
	I1105 10:00:49.045899   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.046076   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.045898   19713 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:49.046146   19703 main.go:141] libmachine: (ha-213000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:00:49.239282   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.239158   19713 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa...
	I1105 10:00:49.422633   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.422543   19713 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk...
	I1105 10:00:49.422652   19703 main.go:141] libmachine: (ha-213000) DBG | Writing magic tar header
	I1105 10:00:49.422661   19703 main.go:141] libmachine: (ha-213000) DBG | Writing SSH key tar header
	I1105 10:00:49.422947   19703 main.go:141] libmachine: (ha-213000) DBG | I1105 10:00:49.422905   19713 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000 ...
	I1105 10:00:49.801010   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.801025   19703 main.go:141] libmachine: (ha-213000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:00:49.801067   19703 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:00:49.968558   19703 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:00:49.968605   19703 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:00:49.968643   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000112720)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:00:49.968680   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000112720)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:00:49.968759   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:00:49.968801   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:00:49.968818   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:00:49.972369   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 DEBUG: hyperkit: Pid is 19716
	I1105 10:00:49.973014   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:00:49.973034   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:49.973101   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:49.974438   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:49.974449   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:49.974461   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:49.974478   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:49.974495   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:49.985017   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:49 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:00:50.043482   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:00:50.044217   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:00:50.044239   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:00:50.044246   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:00:50.044251   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:00:50.437454   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:00:50.437468   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:00:50.552096   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:00:50.552115   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:00:50.552133   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:00:50.552146   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:00:50.553015   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:00:50.553028   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:00:51.975030   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 1
	I1105 10:00:51.975047   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:51.975058   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:51.976103   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:51.976148   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:51.976166   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:51.976186   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:51.976200   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:53.977051   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 2
	I1105 10:00:53.977066   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:53.977114   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:53.978147   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:53.978190   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:53.978202   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:53.978220   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:53.978233   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:55.979026   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 3
	I1105 10:00:55.979043   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:55.979100   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:55.980034   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:55.980091   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:55.980102   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:55.980125   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:55.980137   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:56.301268   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:00:56.301301   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:00:56.301310   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:00:56.324886   19703 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:00:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:00:57.980637   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 4
	I1105 10:00:57.980652   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:57.980732   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:57.981684   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:57.981732   19703 main.go:141] libmachine: (ha-213000) DBG | Found 3 entries in /var/db/dhcpd_leases!
	I1105 10:00:57.981742   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:00:57.981749   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:00:57.981757   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:00:59.983824   19703 main.go:141] libmachine: (ha-213000) DBG | Attempt 5
	I1105 10:00:59.983847   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:59.984007   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:59.985286   19703 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:00:59.985368   19703 main.go:141] libmachine: (ha-213000) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:00:59.985384   19703 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:00:59.985396   19703 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:00:59.985402   19703 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:00:59.985465   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:00:59.986295   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:59.986452   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:00:59.986594   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:00:59.986606   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:00:59.986719   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:00:59.986810   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:00:59.988032   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:00:59.988083   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:00:59.988087   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:00:59.988112   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:00:59.988202   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:00:59.988332   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:00:59.988436   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:00:59.988528   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:00:59.988730   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:00:59.988975   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:00:59.988982   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:01:01.011155   19703 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I1105 10:01:04.073024   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:04.073037   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:01:04.073043   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.073211   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.073307   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.073401   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.073493   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.073653   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.073811   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.073819   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:01:04.133464   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:01:04.133513   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:01:04.133519   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:01:04.133529   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.133678   19703 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:01:04.133689   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.133791   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.133872   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.133967   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.134069   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.134170   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.134305   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.134436   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.134444   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:01:04.206864   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:01:04.206883   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.207030   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.207140   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.207234   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.207324   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.207509   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.207699   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.207711   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:01:04.275310   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:04.275329   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:01:04.275346   19703 buildroot.go:174] setting up certificates
	I1105 10:01:04.275367   19703 provision.go:84] configureAuth start
	I1105 10:01:04.275378   19703 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:01:04.275523   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:04.275627   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.275736   19703 provision.go:143] copyHostCerts
	I1105 10:01:04.275773   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:04.275854   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:01:04.275861   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:04.276002   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:01:04.276234   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:04.276283   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:01:04.276287   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:04.276380   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:01:04.276579   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:04.276626   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:01:04.276631   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:04.276750   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:01:04.276914   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:01:04.409758   19703 provision.go:177] copyRemoteCerts
	I1105 10:01:04.409836   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:01:04.409852   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.410004   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.410102   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.410207   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.410308   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:04.447116   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:01:04.447193   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:01:04.466891   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:01:04.466954   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1105 10:01:04.486228   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:01:04.486290   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:01:04.506082   19703 provision.go:87] duration metric: took 230.693486ms to configureAuth
	I1105 10:01:04.506098   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:01:04.506258   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:04.506272   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:04.506412   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.506508   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.506593   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.506676   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.506765   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.506897   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.507032   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.507040   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:01:04.567965   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:01:04.567978   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:01:04.568060   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:01:04.568074   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.568219   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.568335   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.568441   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.568552   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.568731   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.568876   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.568928   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:01:04.639803   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:01:04.639825   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:04.639961   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:04.640058   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.640141   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:04.640255   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:04.640420   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:04.640549   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:04.640561   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:01:06.214895   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:01:06.214911   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:01:06.214917   19703 main.go:141] libmachine: (ha-213000) Calling .GetURL
	I1105 10:01:06.215063   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:01:06.215071   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:01:06.215076   19703 client.go:171] duration metric: took 17.176350291s to LocalClient.Create
	I1105 10:01:06.215089   19703 start.go:167] duration metric: took 17.176396472s to libmachine.API.Create "ha-213000"
	I1105 10:01:06.215099   19703 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:01:06.215106   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:01:06.215116   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.215261   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:01:06.215274   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.215361   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.215442   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.215528   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.215620   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.251640   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:01:06.255113   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:01:06.255129   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:01:06.255230   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:01:06.255446   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:01:06.255453   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:01:06.255711   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:01:06.263216   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:06.283684   19703 start.go:296] duration metric: took 68.576557ms for postStartSetup
	I1105 10:01:06.283726   19703 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:01:06.284405   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:06.284548   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:06.284926   19703 start.go:128] duration metric: took 17.282000829s to createHost
	I1105 10:01:06.284941   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.285030   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.285125   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.285202   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.285269   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.285398   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:06.285521   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:01:06.285528   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:01:06.344331   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829665.881654888
	
	I1105 10:01:06.344343   19703 fix.go:216] guest clock: 1730829665.881654888
	I1105 10:01:06.344347   19703 fix.go:229] Guest: 2024-11-05 10:01:05.881654888 -0800 PST Remote: 2024-11-05 10:01:06.284934 -0800 PST m=+17.850547767 (delta=-403.279112ms)
	I1105 10:01:06.344367   19703 fix.go:200] guest clock delta is within tolerance: -403.279112ms
	I1105 10:01:06.344370   19703 start.go:83] releasing machines lock for "ha-213000", held for 17.341551607s
	I1105 10:01:06.344388   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.344528   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:06.344623   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.344951   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.345054   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:06.345149   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:01:06.345178   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.345208   19703 ssh_runner.go:195] Run: cat /version.json
	I1105 10:01:06.345219   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:06.345270   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.345332   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:06.345365   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.345442   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:06.345461   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.345518   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:06.345559   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.345639   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:06.378204   19703 ssh_runner.go:195] Run: systemctl --version
	I1105 10:01:06.426915   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:01:06.431535   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:01:06.431591   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:01:06.445898   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:01:06.445913   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:06.446023   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:06.460899   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:01:06.469852   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:01:06.478814   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:01:06.478874   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:01:06.487613   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:06.496557   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:01:06.505258   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:06.514169   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:01:06.524040   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:01:06.533030   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:01:06.541790   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:01:06.550841   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:01:06.558861   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:01:06.558919   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:01:06.568040   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:01:06.576174   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:06.680889   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:01:06.699975   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:06.700071   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:01:06.713715   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:06.724704   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:01:06.743034   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:06.753151   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:06.764276   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:01:06.804304   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:06.815447   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:06.838920   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:01:06.842715   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:01:06.857786   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:01:06.875540   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:01:06.983809   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:01:07.086590   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:01:07.086669   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:01:07.101565   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:07.202392   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:09.490529   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.288138695s)
	I1105 10:01:09.490615   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:01:09.502437   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:01:09.516436   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:09.526819   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:01:09.622839   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:01:09.716251   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:09.826522   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:01:09.839888   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:09.850796   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:09.959403   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:01:10.017340   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:01:10.017457   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:01:10.021721   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:01:10.021786   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:01:10.024691   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:01:10.049837   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:01:10.049922   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:10.066022   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:10.125079   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:01:10.125132   19703 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:01:10.125605   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:01:10.129273   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:10.139143   19703 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:01:10.139212   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:01:10.139280   19703 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:01:10.150154   19703 docker.go:689] Got preloaded images: 
	I1105 10:01:10.150166   19703 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.2 wasn't preloaded
	I1105 10:01:10.150233   19703 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1105 10:01:10.157824   19703 ssh_runner.go:195] Run: which lz4
	I1105 10:01:10.160636   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1105 10:01:10.160780   19703 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 10:01:10.163841   19703 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 10:01:10.163861   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (343199686 bytes)
	I1105 10:01:11.159350   19703 docker.go:653] duration metric: took 998.641869ms to copy over tarball
	I1105 10:01:11.159432   19703 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 10:01:13.249323   19703 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089892673s)
	I1105 10:01:13.249340   19703 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1105 10:01:13.274325   19703 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1105 10:01:13.282463   19703 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1105 10:01:13.296334   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:13.388712   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:15.739164   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.350454399s)
	I1105 10:01:15.739273   19703 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:01:15.754024   19703 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1105 10:01:15.754043   19703 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:01:15.754049   19703 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:01:15.754140   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:01:15.754227   19703 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:01:15.788737   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:01:15.788757   19703 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 10:01:15.788770   19703 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:01:15.788787   19703 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:01:15.788861   19703 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:01:15.788877   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:01:15.788942   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:01:15.801675   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:01:15.801751   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:01:15.801824   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:15.809490   19703 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:01:15.809553   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:01:15.816819   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:01:15.831209   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:01:15.844621   19703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:01:15.857998   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I1105 10:01:15.871169   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:01:15.874131   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:15.883385   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:15.976109   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:01:15.992221   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:01:15.992233   19703 certs.go:194] generating shared ca certs ...
	I1105 10:01:15.992243   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:15.992461   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:01:15.992552   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:01:15.992562   19703 certs.go:256] generating profile certs ...
	I1105 10:01:15.992612   19703 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:01:15.992624   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt with IP's: []
	I1105 10:01:16.094282   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt ...
	I1105 10:01:16.094299   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt: {Name:mk32df45c928182ea5273921e15df540dba3284b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.094649   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key ...
	I1105 10:01:16.094656   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key: {Name:mk4ba8eb16cdbfaf693d3586557970b225775c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.094907   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3
	I1105 10:01:16.094921   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.254]
	I1105 10:01:16.166905   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 ...
	I1105 10:01:16.166920   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3: {Name:mk8e48df26de9447c3326b40118c66ea248d3cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.167265   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3 ...
	I1105 10:01:16.167275   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3: {Name:mkb555a3da1a71d498a5e7f44da4ed0baf461c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.167543   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.0c0b88a3 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:01:16.167743   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.0c0b88a3 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:01:16.167942   19703 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:01:16.167958   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt with IP's: []
	I1105 10:01:16.340393   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt ...
	I1105 10:01:16.340414   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt: {Name:mkad63aa252d0a246c051641017bfdd8bd78fbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.340763   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key ...
	I1105 10:01:16.340771   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key: {Name:mkc1a14cacaacc53921fd9d706ec801444580291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:16.341021   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:01:16.341051   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:01:16.341070   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:01:16.341091   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:01:16.341110   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:01:16.341129   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:01:16.341149   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:01:16.341171   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:01:16.341276   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:01:16.341338   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:01:16.341346   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:01:16.341376   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:01:16.341409   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:01:16.341438   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:01:16.341499   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:16.341533   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.341553   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.341577   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.342013   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:01:16.361630   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:01:16.380740   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:01:16.400614   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:01:16.420038   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1105 10:01:16.439653   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:01:16.458562   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:01:16.478643   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:01:16.497792   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:01:16.516678   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:01:16.535739   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:01:16.555130   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:01:16.569073   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:01:16.573341   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:01:16.582782   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.586227   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.586277   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:01:16.590528   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:01:16.599704   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:01:16.608870   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.612245   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.612298   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:01:16.616513   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:01:16.625608   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:01:16.635771   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.639310   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.639358   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:16.643770   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:01:16.654663   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:01:16.660794   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:01:16.660842   19703 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:01:16.660953   19703 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:01:16.677060   19703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:01:16.690427   19703 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 10:01:16.700859   19703 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 10:01:16.709261   19703 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 10:01:16.709272   19703 kubeadm.go:157] found existing configuration files:
	
	I1105 10:01:16.709351   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 10:01:16.718113   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 10:01:16.718192   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 10:01:16.726411   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 10:01:16.734224   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 10:01:16.734289   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 10:01:16.742733   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 10:01:16.750784   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 10:01:16.750844   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 10:01:16.759076   19703 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 10:01:16.766845   19703 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 10:01:16.766909   19703 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 10:01:16.774996   19703 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1105 10:01:16.840437   19703 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 10:01:16.840491   19703 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 10:01:16.926763   19703 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 10:01:16.926877   19703 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 10:01:16.926980   19703 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 10:01:16.936091   19703 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 10:01:16.983362   19703 out.go:235]   - Generating certificates and keys ...
	I1105 10:01:16.983421   19703 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 10:01:16.983471   19703 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 10:01:17.072797   19703 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 10:01:17.179588   19703 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 10:01:17.306014   19703 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 10:01:17.631639   19703 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 10:01:17.770167   19703 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 10:01:17.770365   19703 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-213000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1105 10:01:18.036090   19703 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 10:01:18.036251   19703 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-213000 localhost] and IPs [192.169.0.5 127.0.0.1 ::1]
	I1105 10:01:18.099648   19703 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 10:01:18.290329   19703 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 10:01:18.487625   19703 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 10:01:18.487812   19703 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 10:01:18.631478   19703 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 10:01:18.780093   19703 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 10:01:18.888960   19703 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 10:01:19.168437   19703 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 10:01:19.347823   19703 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 10:01:19.348317   19703 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 10:01:19.350236   19703 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 10:01:19.371622   19703 out.go:235]   - Booting up control plane ...
	I1105 10:01:19.371724   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 10:01:19.371803   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 10:01:19.371856   19703 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 10:01:19.371944   19703 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 10:01:19.372021   19703 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 10:01:19.372058   19703 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 10:01:19.481087   19703 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 10:01:19.481190   19703 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 10:01:20.488429   19703 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007994623s
	I1105 10:01:20.488531   19703 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 10:01:26.203663   19703 kubeadm.go:310] [api-check] The API server is healthy after 5.719526197s
	I1105 10:01:26.212624   19703 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 10:01:26.220645   19703 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 10:01:26.233694   19703 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 10:01:26.233859   19703 kubeadm.go:310] [mark-control-plane] Marking the node ha-213000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 10:01:26.244246   19703 kubeadm.go:310] [bootstrap-token] Using token: w4nohd.4e3143tllv8ohc8g
	I1105 10:01:26.284768   19703 out.go:235]   - Configuring RBAC rules ...
	I1105 10:01:26.284885   19703 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 10:01:26.286787   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 10:01:26.310075   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 10:01:26.312761   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 10:01:26.318937   19703 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 10:01:26.322239   19703 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 10:01:26.608210   19703 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 10:01:27.037009   19703 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 10:01:27.610360   19703 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 10:01:27.611067   19703 kubeadm.go:310] 
	I1105 10:01:27.611117   19703 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 10:01:27.611123   19703 kubeadm.go:310] 
	I1105 10:01:27.611199   19703 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 10:01:27.611208   19703 kubeadm.go:310] 
	I1105 10:01:27.611229   19703 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 10:01:27.611277   19703 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 10:01:27.611341   19703 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 10:01:27.611352   19703 kubeadm.go:310] 
	I1105 10:01:27.611397   19703 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 10:01:27.611403   19703 kubeadm.go:310] 
	I1105 10:01:27.611451   19703 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 10:01:27.611459   19703 kubeadm.go:310] 
	I1105 10:01:27.611495   19703 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 10:01:27.611550   19703 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 10:01:27.611623   19703 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 10:01:27.611630   19703 kubeadm.go:310] 
	I1105 10:01:27.611697   19703 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 10:01:27.611766   19703 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 10:01:27.611773   19703 kubeadm.go:310] 
	I1105 10:01:27.611836   19703 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w4nohd.4e3143tllv8ohc8g \
	I1105 10:01:27.611921   19703 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 \
	I1105 10:01:27.611942   19703 kubeadm.go:310] 	--control-plane 
	I1105 10:01:27.611949   19703 kubeadm.go:310] 
	I1105 10:01:27.612027   19703 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 10:01:27.612038   19703 kubeadm.go:310] 
	I1105 10:01:27.612109   19703 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w4nohd.4e3143tllv8ohc8g \
	I1105 10:01:27.612190   19703 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 
	I1105 10:01:27.612839   19703 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 10:01:27.612851   19703 cni.go:84] Creating CNI manager for ""
	I1105 10:01:27.612855   19703 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1105 10:01:27.638912   19703 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 10:01:27.682614   19703 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 10:01:27.687942   19703 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 10:01:27.687953   19703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 10:01:27.701992   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 10:01:27.936771   19703 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 10:01:27.936836   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:27.936838   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000 minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=true
	I1105 10:01:28.117503   19703 ops.go:34] apiserver oom_adj: -16
	I1105 10:01:28.117657   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:28.618627   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:29.117808   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:29.617729   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:30.119155   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:30.618084   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:31.118505   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 10:01:31.195673   19703 kubeadm.go:1113] duration metric: took 3.258930438s to wait for elevateKubeSystemPrivileges
	I1105 10:01:31.195694   19703 kubeadm.go:394] duration metric: took 14.534988132s to StartCluster
	I1105 10:01:31.195710   19703 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:31.195820   19703 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:01:31.196307   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:31.196590   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 10:01:31.196592   19703 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:31.196603   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:01:31.196618   19703 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:01:31.196671   19703 addons.go:69] Setting storage-provisioner=true in profile "ha-213000"
	I1105 10:01:31.196685   19703 addons.go:234] Setting addon storage-provisioner=true in "ha-213000"
	I1105 10:01:31.196691   19703 addons.go:69] Setting default-storageclass=true in profile "ha-213000"
	I1105 10:01:31.196703   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:31.196707   19703 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-213000"
	I1105 10:01:31.196741   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:31.196976   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.196986   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.196996   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.197000   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.208908   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57639
	I1105 10:01:31.209261   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.209642   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.209655   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.209868   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57641
	I1105 10:01:31.209885   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.210032   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.210143   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.210244   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.210251   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.210574   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.210584   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.210788   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.211192   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.211225   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.212394   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:01:31.213752   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:01:31.214400   19703 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:01:31.214537   19703 addons.go:234] Setting addon default-storageclass=true in "ha-213000"
	I1105 10:01:31.214564   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:31.214803   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.214828   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.223254   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57643
	I1105 10:01:31.223597   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.224001   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.224022   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.224270   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.224394   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.224509   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.224581   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.225831   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:31.226397   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57645
	I1105 10:01:31.226753   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.227096   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.227107   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.227355   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.227741   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:31.227767   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:31.238983   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57647
	I1105 10:01:31.239279   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:31.239639   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:31.239659   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:31.239882   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:31.239983   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:31.240069   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:31.240135   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:31.241282   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:31.241435   19703 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 10:01:31.241450   19703 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 10:01:31.241460   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:31.241543   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:31.241623   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:31.241696   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:31.241776   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:31.250526   19703 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 10:01:31.270056   19703 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 10:01:31.270068   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 10:01:31.270085   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:31.270249   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:31.270368   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:31.270493   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:31.270593   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:31.343009   19703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 10:01:31.358734   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 10:01:31.372889   19703 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 10:01:31.583824   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.583836   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.584072   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.584088   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.584097   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.584114   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.584120   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.584249   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.584257   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.584263   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.584311   19703 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:01:31.584343   19703 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:01:31.584428   19703 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1105 10:01:31.584433   19703 round_trippers.go:469] Request Headers:
	I1105 10:01:31.584440   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:01:31.584445   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:01:31.589847   19703 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:01:31.590273   19703 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1105 10:01:31.590280   19703 round_trippers.go:469] Request Headers:
	I1105 10:01:31.590285   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:01:31.590289   19703 round_trippers.go:473]     Content-Type: application/json
	I1105 10:01:31.590292   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:01:31.591793   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:01:31.591915   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.591923   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.592075   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.592084   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.592098   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.661628   19703 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I1105 10:01:31.799548   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.799567   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.799772   19703 main.go:141] libmachine: (ha-213000) DBG | Closing plugin on server side
	I1105 10:01:31.799790   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.799800   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.799817   19703 main.go:141] libmachine: Making call to close driver server
	I1105 10:01:31.799822   19703 main.go:141] libmachine: (ha-213000) Calling .Close
	I1105 10:01:31.799950   19703 main.go:141] libmachine: Successfully made call to close driver server
	I1105 10:01:31.799959   19703 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 10:01:31.823619   19703 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1105 10:01:31.881388   19703 addons.go:510] duration metric: took 684.78194ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1105 10:01:31.881432   19703 start.go:246] waiting for cluster config update ...
	I1105 10:01:31.881446   19703 start.go:255] writing updated cluster config ...
	I1105 10:01:31.902486   19703 out.go:201] 
	I1105 10:01:31.940014   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:31.940131   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:31.962472   19703 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:01:32.004496   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:01:32.004517   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:01:32.004642   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:01:32.004651   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:01:32.004703   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:32.005148   19703 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:01:32.005220   19703 start.go:364] duration metric: took 59.105µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:01:32.005235   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:32.005275   19703 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I1105 10:01:32.026387   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:01:32.026549   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:32.026581   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:32.038441   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57652
	I1105 10:01:32.038798   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:32.039196   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:32.039218   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:32.039447   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:32.039560   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:32.039666   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:32.039774   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:01:32.039792   19703 client.go:168] LocalClient.Create starting
	I1105 10:01:32.039824   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:01:32.039866   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:01:32.039878   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:01:32.039917   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:01:32.039950   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:01:32.039959   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:01:32.039978   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:01:32.039982   19703 main.go:141] libmachine: (ha-213000-m02) Calling .PreCreateCheck
	I1105 10:01:32.040065   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.040093   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:32.047652   19703 main.go:141] libmachine: Creating machine...
	I1105 10:01:32.047661   19703 main.go:141] libmachine: (ha-213000-m02) Calling .Create
	I1105 10:01:32.047736   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.047898   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.047732   19737 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:01:32.047955   19703 main.go:141] libmachine: (ha-213000-m02) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:01:32.258405   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.258328   19737 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa...
	I1105 10:01:32.370475   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.370420   19737 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk...
	I1105 10:01:32.370496   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Writing magic tar header
	I1105 10:01:32.370504   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Writing SSH key tar header
	I1105 10:01:32.371373   19703 main.go:141] libmachine: (ha-213000-m02) DBG | I1105 10:01:32.371253   19737 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02 ...
	I1105 10:01:32.760483   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.760499   19703 main.go:141] libmachine: (ha-213000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:01:32.760532   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:01:32.785150   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:01:32.785168   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:01:32.785208   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:01:32.785232   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:01:32.785286   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:01:32.785316   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:01:32.785326   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:01:32.788392   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 DEBUG: hyperkit: Pid is 19738
	I1105 10:01:32.789760   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:01:32.789776   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:32.789838   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:32.790923   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:32.791036   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:32.791047   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:32.791055   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:32.791063   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:32.791071   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:32.799256   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:01:32.810076   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:01:32.811011   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:01:32.811039   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:01:32.811065   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:01:32.811083   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:01:33.216124   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:01:33.216141   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:01:33.331141   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:01:33.331187   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:01:33.331200   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:01:33.331210   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:01:33.331930   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:01:33.331952   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:01:34.791292   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 1
	I1105 10:01:34.791308   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:34.791415   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:34.792404   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:34.792463   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:34.792476   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:34.792486   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:34.792493   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:34.792500   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:36.794004   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 2
	I1105 10:01:36.794019   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:36.794104   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:36.795044   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:36.795099   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:36.795107   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:36.795115   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:36.795123   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:36.795143   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:38.796117   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 3
	I1105 10:01:38.796134   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:38.796192   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:38.797137   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:38.797198   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:38.797207   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:38.797215   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:38.797220   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:38.797228   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:39.085812   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:01:39.085887   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:01:39.085896   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:01:39.108556   19703 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:01:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:01:40.797630   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 4
	I1105 10:01:40.797646   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:40.797725   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:40.798681   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:40.798749   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 4 entries in /var/db/dhcpd_leases!
	I1105 10:01:40.798757   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:01:40.798766   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:01:40.798773   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:01:40.798785   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:01:42.800804   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 5
	I1105 10:01:42.800819   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:42.800888   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:42.801843   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:01:42.801914   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:01:42.801923   19703 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:01:42.801933   19703 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:01:42.801939   19703 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:01:42.802006   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:42.802642   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:42.802744   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:42.802850   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:01:42.802857   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:01:42.802937   19703 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:42.802999   19703 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:01:42.803924   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:01:42.803931   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:01:42.803935   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:01:42.803939   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:42.804024   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:42.804111   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:42.804205   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:42.804300   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:42.804436   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:42.804615   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:42.804623   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:01:43.860176   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:43.860188   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:01:43.860194   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.860339   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.860450   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.860549   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.860635   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.860782   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.860934   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.860943   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:01:43.918908   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:01:43.918939   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:01:43.918944   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:01:43.918953   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:43.919089   19703 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:01:43.919101   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:43.919200   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.919297   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.919385   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.919473   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.919562   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.919750   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.919884   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.919892   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:01:43.986937   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:01:43.986952   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:43.987088   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:43.987192   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.987282   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:43.987385   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:43.987525   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:43.987656   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:43.987668   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:01:44.049824   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:01:44.049837   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:01:44.049852   19703 buildroot.go:174] setting up certificates
	I1105 10:01:44.049859   19703 provision.go:84] configureAuth start
	I1105 10:01:44.049865   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:01:44.050000   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:44.050104   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.050210   19703 provision.go:143] copyHostCerts
	I1105 10:01:44.050243   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:44.050287   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:01:44.050293   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:01:44.050418   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:01:44.050628   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:44.050658   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:01:44.050663   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:01:44.050731   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:01:44.050902   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:44.050930   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:01:44.050935   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:01:44.050999   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:01:44.051159   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:01:44.155430   19703 provision.go:177] copyRemoteCerts
	I1105 10:01:44.155494   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:01:44.155508   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.155652   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.155761   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.155855   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.155960   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:44.190390   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:01:44.190459   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:01:44.209956   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:01:44.210020   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:01:44.229611   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:01:44.229678   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:01:44.249732   19703 provision.go:87] duration metric: took 199.867169ms to configureAuth
	I1105 10:01:44.249751   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:01:44.249884   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:44.249897   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:44.250035   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.250145   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.250227   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.250309   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.250384   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.250517   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.250642   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.250651   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:01:44.307473   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:01:44.307484   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:01:44.307570   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:01:44.307582   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.307713   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.307800   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.307896   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.307984   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.308146   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.308285   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.308329   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:01:44.374560   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:01:44.374579   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:44.374715   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:44.374811   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.374905   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:44.374997   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:44.375155   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:44.375292   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:44.375306   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:01:45.916909   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:01:45.916923   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:01:45.916928   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetURL
	I1105 10:01:45.917079   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:01:45.917088   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:01:45.917094   19703 client.go:171] duration metric: took 13.877421847s to LocalClient.Create
	I1105 10:01:45.917107   19703 start.go:167] duration metric: took 13.877464427s to libmachine.API.Create "ha-213000"
	I1105 10:01:45.917113   19703 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:01:45.917119   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:01:45.917129   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:45.917290   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:01:45.917304   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:45.917390   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:45.917474   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:45.917556   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:45.917651   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:45.954621   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:01:45.963284   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:01:45.963298   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:01:45.963394   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:01:45.963534   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:01:45.963541   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:01:45.963709   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:01:45.974744   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:46.007617   19703 start.go:296] duration metric: took 90.496072ms for postStartSetup
	I1105 10:01:46.007644   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:01:46.008278   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:46.008431   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:01:46.008809   19703 start.go:128] duration metric: took 14.00365458s to createHost
	I1105 10:01:46.008826   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:46.008921   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.009026   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.009114   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.009199   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.009324   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:01:46.009442   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:01:46.009449   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:01:46.065399   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829706.339878187
	
	I1105 10:01:46.065410   19703 fix.go:216] guest clock: 1730829706.339878187
	I1105 10:01:46.065415   19703 fix.go:229] Guest: 2024-11-05 10:01:46.339878187 -0800 PST Remote: 2024-11-05 10:01:46.00882 -0800 PST m=+57.574793708 (delta=331.058187ms)
	I1105 10:01:46.065424   19703 fix.go:200] guest clock delta is within tolerance: 331.058187ms
	I1105 10:01:46.065428   19703 start.go:83] releasing machines lock for "ha-213000-m02", held for 14.060329703s
	I1105 10:01:46.065445   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.065576   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:46.087717   19703 out.go:177] * Found network options:
	I1105 10:01:46.109842   19703 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:01:46.131999   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:01:46.132065   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.132924   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.133189   19703 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:01:46.133350   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:01:46.133419   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:01:46.133425   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:01:46.133511   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:01:46.133525   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:01:46.133564   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.133658   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:01:46.133724   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.133792   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:01:46.133856   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.133913   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:01:46.133985   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:01:46.134067   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:01:46.166296   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:01:46.166372   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:01:46.210783   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:01:46.210798   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:46.210864   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:46.225606   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:01:46.234567   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:01:46.243434   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:01:46.243498   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:01:46.252254   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:46.260991   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:01:46.269783   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:01:46.278460   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:01:46.287315   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:01:46.296362   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:01:46.305259   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:01:46.314314   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:01:46.322151   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:01:46.322203   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:01:46.331333   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:01:46.339411   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:46.437814   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:01:46.456976   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:01:46.457074   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:01:46.473512   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:46.487971   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:01:46.501912   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:01:46.512646   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:46.523147   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:01:46.545158   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:01:46.555335   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:01:46.570377   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:01:46.573322   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:01:46.580455   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:01:46.594087   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:01:46.688786   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:01:46.806047   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:01:46.806077   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:01:46.821570   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:46.919986   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:01:49.283369   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.363383588s)
	I1105 10:01:49.283454   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:01:49.293731   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:01:49.306548   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:49.317994   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:01:49.421101   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:01:49.523439   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:49.641875   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:01:49.655594   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:01:49.667711   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:49.787298   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:01:49.845991   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:01:49.846096   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:01:49.851066   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:01:49.851131   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:01:49.854437   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:01:49.883943   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:01:49.884034   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:49.900385   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:01:49.958496   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:01:50.015373   19703 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:01:50.036835   19703 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:01:50.037289   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:01:50.041454   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:50.051908   19703 mustload.go:65] Loading cluster: ha-213000
	I1105 10:01:50.052063   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:01:50.052290   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:50.052318   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:50.063943   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57675
	I1105 10:01:50.064254   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:50.064622   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:50.064639   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:50.064857   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:50.064943   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:01:50.065040   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:01:50.065101   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:01:50.066239   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:50.066502   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:50.066538   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:50.077511   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57677
	I1105 10:01:50.077820   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:50.078153   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:50.078165   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:50.078378   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:50.078491   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:50.078597   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:01:50.078603   19703 certs.go:194] generating shared ca certs ...
	I1105 10:01:50.078614   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.078762   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:01:50.078814   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:01:50.078823   19703 certs.go:256] generating profile certs ...
	I1105 10:01:50.078932   19703 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:01:50.078952   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:01:50.078965   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:01:50.259675   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 ...
	I1105 10:01:50.259696   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614: {Name:mk88a6c605d32cdc699192a3b9f65c36d4d8999e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.260061   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614 ...
	I1105 10:01:50.260070   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614: {Name:mk09cfa8a7c58367d4fd503cdc6b46cb11ab646e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:01:50.260325   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.72f96614 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:01:50.260527   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:01:50.260749   19703 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:01:50.260759   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:01:50.260781   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:01:50.260800   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:01:50.260819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:01:50.260838   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:01:50.260856   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:01:50.260874   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:01:50.260893   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:01:50.260970   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:01:50.261007   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:01:50.261015   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:01:50.261049   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:01:50.261078   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:01:50.261106   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:01:50.261167   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:01:50.261198   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.261219   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.261239   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.261271   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:50.261425   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:50.261537   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:50.261637   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:50.261724   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:50.291214   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:01:50.295029   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:01:50.304845   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:01:50.308184   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:01:50.316387   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:01:50.319596   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:01:50.337836   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:01:50.342163   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:01:50.351433   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:01:50.354494   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:01:50.362620   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:01:50.365669   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:01:50.373430   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:01:50.393167   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:01:50.412948   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:01:50.433246   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:01:50.454659   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 10:01:50.476244   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:01:50.497382   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:01:50.518008   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:01:50.539746   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:01:50.559947   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:01:50.580509   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:01:50.600953   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:01:50.615539   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:01:50.631388   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:01:50.646013   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:01:50.660722   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:01:50.675559   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:01:50.690207   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:01:50.705146   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:01:50.709852   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:01:50.720235   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.724017   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.724092   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:01:50.728732   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:01:50.738807   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:01:50.748691   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.752394   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.752467   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:01:50.757374   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:01:50.767503   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:01:50.777410   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.781104   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.781174   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:01:50.785786   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:01:50.795583   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:01:50.798943   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:01:50.798984   19703 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:01:50.799037   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:01:50.799054   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:01:50.799124   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:01:50.814293   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:01:50.814341   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:01:50.814416   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:50.823146   19703 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 10:01:50.823227   19703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 10:01:50.832606   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl
	I1105 10:01:50.832611   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1105 10:01:50.832606   19703 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1105 10:01:53.130716   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:01:53.131358   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:01:53.135037   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 10:01:53.135071   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 10:01:53.503819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:01:53.503977   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:01:53.507746   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 10:01:53.507776   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 10:01:54.433884   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:01:54.445780   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:01:54.449919   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:01:54.453259   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 10:01:54.453278   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 10:01:54.698228   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:01:54.705621   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:01:54.719140   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:01:54.732989   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:01:54.747064   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:01:54.750001   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:01:54.764611   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:01:54.865217   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:01:54.881672   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:01:54.881968   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:01:54.881993   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:01:54.911777   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57704
	I1105 10:01:54.912101   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:01:54.912484   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:01:54.912500   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:01:54.912742   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:01:54.912836   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:01:54.912927   19703 start.go:317] joinCluster: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clu
sterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:01:54.913009   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 10:01:54.913021   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:01:54.913107   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:01:54.913205   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:01:54.913312   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:01:54.913396   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:01:55.045976   19703 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:01:55.046004   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5r3zgt.rf02xhd5n0rx0515 --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443"
	I1105 10:02:51.053865   19703 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5r3zgt.rf02xhd5n0rx0515 --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m02 --control-plane --apiserver-advertise-address=192.169.0.6 --apiserver-bind-port=8443": (56.008348234s)
	I1105 10:02:51.053890   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 10:02:51.498162   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000-m02 minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=false
	I1105 10:02:51.582922   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-213000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 10:02:51.686566   19703 start.go:319] duration metric: took 56.774150217s to joinCluster
	I1105 10:02:51.686607   19703 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:02:51.686840   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:02:51.711458   19703 out.go:177] * Verifying Kubernetes components...
	I1105 10:02:51.753270   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:02:52.031063   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:02:52.044378   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:02:52.044645   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:02:52.044691   19703 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:02:52.044863   19703 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:02:52.044923   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:52.044928   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:52.044934   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:52.044938   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:52.058081   19703 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 10:02:52.545656   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:52.545674   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:52.545681   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:52.545696   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:52.555893   19703 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 10:02:53.045024   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:53.045046   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:53.045052   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:53.045055   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:53.048645   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:02:53.546069   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:53.546084   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:53.546091   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:53.546093   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:53.547996   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:54.045237   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:54.045257   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:54.045267   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:54.045272   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:54.047820   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:54.048291   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:54.545760   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:54.545775   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:54.545782   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:54.545785   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:54.547819   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:55.044978   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:55.044993   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:55.045001   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:55.045004   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:55.046984   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:55.545354   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:55.545368   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:55.545375   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:55.545378   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:55.548038   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.045218   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:56.045245   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:56.045253   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:56.045268   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:56.047334   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.544996   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:56.545011   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:56.545018   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:56.545021   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:56.547178   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:56.547527   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:57.045514   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:57.045534   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:57.045542   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:57.045547   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:57.047484   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:02:57.544988   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:57.545015   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:57.545024   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:57.545051   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:57.547803   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.045000   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:58.045015   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:58.045024   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:58.045028   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:58.047832   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.546341   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:58.546364   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:58.546375   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:58.546382   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:58.549120   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:58.549600   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:02:59.046258   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:59.046274   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:59.046280   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:59.046284   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:59.048819   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:02:59.545918   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:02:59.545955   19703 round_trippers.go:469] Request Headers:
	I1105 10:02:59.545965   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:02:59.545970   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:02:59.548272   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:00.046889   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:00.046941   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:00.046952   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:00.046957   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:00.049445   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:00.545645   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:00.545688   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:00.545698   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:00.545703   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:00.547837   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:01.045437   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:01.045458   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:01.045487   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:01.045493   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:01.048085   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:01.048344   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:01.545645   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:01.545659   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:01.545667   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:01.545671   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:01.547691   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:02.045598   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:02.045646   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:02.045658   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:02.045664   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:02.048807   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:02.546565   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:02.546592   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:02.546604   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:02.546612   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:02.549476   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:03.045855   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:03.045870   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:03.045879   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:03.045884   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:03.048183   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:03.048643   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:03.545655   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:03.545672   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:03.545680   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:03.545685   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:03.548253   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:04.045787   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:04.045799   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:04.045804   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:04.045808   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:04.047953   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:04.545151   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:04.545177   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:04.545189   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:04.545194   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:04.548344   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:05.045476   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:05.045491   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:05.045499   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:05.045505   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:05.047869   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:05.545655   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:05.545683   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:05.545690   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:05.545695   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:05.547764   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:05.548130   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:06.044875   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:06.044918   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:06.044928   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:06.044935   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:06.048311   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:06.544896   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:06.544908   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:06.544914   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:06.544918   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:06.547153   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:07.046736   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:07.046760   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:07.046771   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:07.046776   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:07.053742   19703 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:03:07.545035   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:07.545055   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:07.545065   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:07.545072   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:07.548090   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:07.548456   19703 node_ready.go:53] node "ha-213000-m02" has status "Ready":"False"
	I1105 10:03:08.045232   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:08.045266   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:08.045277   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:08.045283   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:08.047484   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:08.546527   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:08.546544   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:08.546553   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:08.546557   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:08.549226   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:09.044874   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:09.044886   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:09.044892   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:09.044895   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:09.047137   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:09.544845   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:09.544883   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:09.544894   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:09.544900   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:09.547114   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.045255   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.045274   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.045282   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.045287   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.047408   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.047748   19703 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:03:10.047761   19703 node_ready.go:38] duration metric: took 18.003041287s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:03:10.047767   19703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:03:10.047809   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:10.047815   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.047821   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.047824   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.050396   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.054843   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.054888   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:03:10.054893   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.054898   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.054902   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.056627   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.057017   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.057024   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.057030   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.057034   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.058541   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.058972   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.058981   19703 pod_ready.go:82] duration metric: took 4.12715ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.058987   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.059026   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:03:10.059031   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.059036   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.059040   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.060406   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.060936   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.060944   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.060949   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.060952   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.062259   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.062752   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.062760   19703 pod_ready.go:82] duration metric: took 3.768625ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.062766   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.062794   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:03:10.062799   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.062804   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.062808   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.064381   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.064737   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.064744   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.064749   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.064753   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.066188   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.066657   19703 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.066666   19703 pod_ready.go:82] duration metric: took 3.89498ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.066671   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.066702   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:03:10.066707   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.066716   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.066720   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.068481   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.068975   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.068982   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.068988   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.068993   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.070316   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.070596   19703 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.070604   19703 pod_ready.go:82] duration metric: took 3.927721ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.070613   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.246069   19703 request.go:632] Waited for 175.411357ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:03:10.246135   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:03:10.246143   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.246150   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.246157   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.248509   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.445735   19703 request.go:632] Waited for 196.830174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.445773   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:10.445810   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.445820   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.445825   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.447694   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.448028   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.448037   19703 pod_ready.go:82] duration metric: took 377.422873ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.448044   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.645309   19703 request.go:632] Waited for 197.231613ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:03:10.645351   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:03:10.645387   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.645393   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.645398   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.647385   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:03:10.845570   19703 request.go:632] Waited for 197.573578ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.845611   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:10.845619   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:10.845632   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:10.845641   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:10.848369   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:10.848765   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:10.848776   19703 pod_ready.go:82] duration metric: took 400.729678ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:10.848783   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.046537   19703 request.go:632] Waited for 197.717054ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:03:11.046604   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:03:11.046612   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.046621   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.046628   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.048951   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:11.246799   19703 request.go:632] Waited for 197.304848ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:11.246915   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:11.246922   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.246932   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.246938   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.249732   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:11.250168   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:11.250177   19703 pod_ready.go:82] duration metric: took 401.392962ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.250184   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.446428   19703 request.go:632] Waited for 196.056309ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:03:11.446480   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:03:11.446489   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.446499   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.446505   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.449627   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:11.646969   19703 request.go:632] Waited for 196.797375ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:11.647076   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:11.647092   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.647104   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.647110   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.650574   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:11.650948   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:11.650974   19703 pod_ready.go:82] duration metric: took 400.789912ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.650982   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:11.845660   19703 request.go:632] Waited for 194.624945ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:03:11.845696   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:03:11.845702   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:11.845710   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:11.845715   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:11.848177   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.045843   19703 request.go:632] Waited for 197.194177ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:12.045884   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:12.045890   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.045898   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.045904   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.048353   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.048691   19703 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.048700   19703 pod_ready.go:82] duration metric: took 397.716056ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.048712   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.247441   19703 request.go:632] Waited for 198.639286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:03:12.247526   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:03:12.247535   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.247546   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.247553   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.251277   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:12.445804   19703 request.go:632] Waited for 193.98688ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.445866   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.445879   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.445891   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.445909   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.448985   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:12.449660   19703 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.449672   19703 pod_ready.go:82] duration metric: took 400.957346ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.449680   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.646023   19703 request.go:632] Waited for 196.29617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:03:12.646138   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:03:12.646148   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.646156   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.646160   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.648881   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.846423   19703 request.go:632] Waited for 197.041377ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.846478   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:03:12.846486   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:12.846495   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:12.846500   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:12.849237   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:12.849526   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:12.849536   19703 pod_ready.go:82] duration metric: took 399.853481ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:12.849543   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:13.046888   19703 request.go:632] Waited for 197.276485ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:03:13.046931   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:03:13.046938   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.046973   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.046978   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.049651   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:13.246632   19703 request.go:632] Waited for 196.567235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:13.246683   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:03:13.246692   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.246727   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.246737   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.249732   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:03:13.250368   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:03:13.250378   19703 pod_ready.go:82] duration metric: took 400.834283ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:03:13.250385   19703 pod_ready.go:39] duration metric: took 3.20263718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:03:13.250407   19703 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:03:13.250476   19703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:03:13.263389   19703 api_server.go:72] duration metric: took 21.576959393s to wait for apiserver process to appear ...
	I1105 10:03:13.263406   19703 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:03:13.263422   19703 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:03:13.267595   19703 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:03:13.267641   19703 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:03:13.267649   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.267658   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.267666   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.268160   19703 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:03:13.268232   19703 api_server.go:141] control plane version: v1.31.2
	I1105 10:03:13.268245   19703 api_server.go:131] duration metric: took 4.83504ms to wait for apiserver health ...
	I1105 10:03:13.268250   19703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:03:13.447320   19703 request.go:632] Waited for 179.029832ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.447393   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.447401   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.447409   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.447414   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.451090   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.454646   19703 system_pods.go:59] 17 kube-system pods found
	I1105 10:03:13.454662   19703 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:03:13.454667   19703 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:03:13.454671   19703 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:03:13.454673   19703 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:03:13.454676   19703 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:03:13.454679   19703 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:03:13.454681   19703 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:03:13.454685   19703 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:03:13.454688   19703 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:03:13.454699   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:03:13.454702   19703 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:03:13.454707   19703 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:03:13.454710   19703 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:03:13.454712   19703 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:03:13.454715   19703 system_pods.go:61] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:03:13.454718   19703 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:03:13.454721   19703 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:03:13.454725   19703 system_pods.go:74] duration metric: took 186.473341ms to wait for pod list to return data ...
	I1105 10:03:13.454731   19703 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:03:13.645590   19703 request.go:632] Waited for 190.785599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:03:13.645629   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:03:13.645636   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.645645   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.645651   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.648706   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.648833   19703 default_sa.go:45] found service account: "default"
	I1105 10:03:13.648842   19703 default_sa.go:55] duration metric: took 194.109049ms for default service account to be created ...
	I1105 10:03:13.648848   19703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:03:13.845301   19703 request.go:632] Waited for 196.413293ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.845347   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:03:13.845354   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:13.845362   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:13.845368   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:13.849295   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:03:13.853094   19703 system_pods.go:86] 17 kube-system pods found
	I1105 10:03:13.853105   19703 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:03:13.853109   19703 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:03:13.853113   19703 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:03:13.853116   19703 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:03:13.853122   19703 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:03:13.853125   19703 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:03:13.853128   19703 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:03:13.853131   19703 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:03:13.853133   19703 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:03:13.853139   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:03:13.853145   19703 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:03:13.853147   19703 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:03:13.853150   19703 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:03:13.853153   19703 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:03:13.853155   19703 system_pods.go:89] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:03:13.853158   19703 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:03:13.853161   19703 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:03:13.853165   19703 system_pods.go:126] duration metric: took 204.31519ms to wait for k8s-apps to be running ...
	I1105 10:03:13.853173   19703 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:03:13.853242   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:03:13.864800   19703 system_svc.go:56] duration metric: took 11.624062ms WaitForService to wait for kubelet
	I1105 10:03:13.864814   19703 kubeadm.go:582] duration metric: took 22.178392392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:03:13.864830   19703 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:03:14.047134   19703 request.go:632] Waited for 182.24401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:03:14.047270   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:03:14.047286   19703 round_trippers.go:469] Request Headers:
	I1105 10:03:14.047300   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:03:14.047306   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:03:14.051327   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:03:14.051979   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:03:14.051996   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:03:14.052008   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:03:14.052011   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:03:14.052014   19703 node_conditions.go:105] duration metric: took 187.182073ms to run NodePressure ...
	I1105 10:03:14.052022   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:03:14.052040   19703 start.go:255] writing updated cluster config ...
	I1105 10:03:14.073950   19703 out.go:201] 
	I1105 10:03:14.095736   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:14.095829   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:14.117446   19703 out.go:177] * Starting "ha-213000-m03" control-plane node in "ha-213000" cluster
	I1105 10:03:14.159672   19703 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:03:14.159707   19703 cache.go:56] Caching tarball of preloaded images
	I1105 10:03:14.159953   19703 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:03:14.159973   19703 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:03:14.160101   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:14.161238   19703 start.go:360] acquireMachinesLock for ha-213000-m03: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:03:14.161402   19703 start.go:364] duration metric: took 132.038µs to acquireMachinesLock for "ha-213000-m03"
	I1105 10:03:14.161444   19703 start.go:93] Provisioning new machine with config: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:03:14.161537   19703 start.go:125] createHost starting for "m03" (driver="hyperkit")
	I1105 10:03:14.203378   19703 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:03:14.203509   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:14.203540   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:14.215464   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57709
	I1105 10:03:14.215800   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:14.216175   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:14.216187   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:14.216413   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:14.216532   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:14.216639   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:14.216755   19703 start.go:159] libmachine.API.Create for "ha-213000" (driver="hyperkit")
	I1105 10:03:14.216774   19703 client.go:168] LocalClient.Create starting
	I1105 10:03:14.216804   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:03:14.216876   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:03:14.216886   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:03:14.216927   19703 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:03:14.216976   19703 main.go:141] libmachine: Decoding PEM data...
	I1105 10:03:14.216986   19703 main.go:141] libmachine: Parsing certificate...
	I1105 10:03:14.217000   19703 main.go:141] libmachine: Running pre-create checks...
	I1105 10:03:14.217004   19703 main.go:141] libmachine: (ha-213000-m03) Calling .PreCreateCheck
	I1105 10:03:14.217109   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:14.217166   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:14.217654   19703 main.go:141] libmachine: Creating machine...
	I1105 10:03:14.217662   19703 main.go:141] libmachine: (ha-213000-m03) Calling .Create
	I1105 10:03:14.217732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:14.217901   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.217732   19773 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:03:14.217969   19703 main.go:141] libmachine: (ha-213000-m03) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:03:14.490580   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.490490   19773 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa...
	I1105 10:03:14.554451   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.554363   19773 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk...
	I1105 10:03:14.554467   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Writing magic tar header
	I1105 10:03:14.554475   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Writing SSH key tar header
	I1105 10:03:14.555306   19703 main.go:141] libmachine: (ha-213000-m03) DBG | I1105 10:03:14.555244   19773 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03 ...
	I1105 10:03:15.030518   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:15.030575   19703 main.go:141] libmachine: (ha-213000-m03) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid
	I1105 10:03:15.030627   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Using UUID 9e834d88-ec2a-4703-a798-2d165259ce86
	I1105 10:03:15.063985   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Generated MAC 06:83:5c:e9:cb:34
	I1105 10:03:15.064010   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:03:15.064043   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9e834d88-ec2a-4703-a798-2d165259ce86", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:03:15.064075   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9e834d88-ec2a-4703-a798-2d165259ce86", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d21e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:03:15.064114   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9e834d88-ec2a-4703-a798-2d165259ce86", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:03:15.064146   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9e834d88-ec2a-4703-a798-2d165259ce86 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/ha-213000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:03:15.064163   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:03:15.067111   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 DEBUG: hyperkit: Pid is 19776
	I1105 10:03:15.067572   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 0
	I1105 10:03:15.067585   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:15.067601   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:15.068753   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:15.068832   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:15.068842   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:15.068849   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:15.068858   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:15.068869   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:15.068885   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:15.077682   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:03:15.086555   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:03:15.087707   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:03:15.087735   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:03:15.087751   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:03:15.087769   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:03:15.487970   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:03:15.487985   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:03:15.602906   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:03:15.602926   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:03:15.602934   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:03:15.602939   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:03:15.603764   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:03:15.603775   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:03:17.070391   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 1
	I1105 10:03:17.070406   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:17.070484   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:17.071430   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:17.071487   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:17.071507   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:17.071517   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:17.071526   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:17.071533   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:17.071540   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:19.071643   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 2
	I1105 10:03:19.071657   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:19.071732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:19.072732   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:19.072790   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:19.072797   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:19.072820   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:19.072832   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:19.072839   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:19.072847   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:21.074196   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 3
	I1105 10:03:21.074212   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:21.074292   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:21.075239   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:21.075306   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:21.075318   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:21.075336   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:21.075342   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:21.075348   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:21.075356   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:21.396531   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:03:21.396580   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:03:21.396612   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:03:21.420738   19703 main.go:141] libmachine: (ha-213000-m03) DBG | 2024/11/05 10:03:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:03:23.075524   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 4
	I1105 10:03:23.075538   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:23.075648   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:23.076609   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:23.076667   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 5 entries in /var/db/dhcpd_leases!
	I1105 10:03:23.076676   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6b96}
	I1105 10:03:23.076684   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6b6b}
	I1105 10:03:23.076690   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:03:23.076697   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:03:23.076705   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:03:25.077448   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Attempt 5
	I1105 10:03:25.077468   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:25.077588   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:25.078838   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Searching for 06:83:5c:e9:cb:34 in /var/db/dhcpd_leases ...
	I1105 10:03:25.078950   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found 6 entries in /var/db/dhcpd_leases!
	I1105 10:03:25.078960   19703 main.go:141] libmachine: (ha-213000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a6bfc}
	I1105 10:03:25.078966   19703 main.go:141] libmachine: (ha-213000-m03) DBG | Found match: 06:83:5c:e9:cb:34
	I1105 10:03:25.078970   19703 main.go:141] libmachine: (ha-213000-m03) DBG | IP: 192.169.0.7
	I1105 10:03:25.079034   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:25.079648   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:25.079753   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:25.079858   19703 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:03:25.079867   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:03:25.079968   19703 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:25.080027   19703 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:03:25.081028   19703 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:03:25.081037   19703 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:03:25.081043   19703 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:03:25.081047   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:25.081134   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:25.081211   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:25.081299   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:25.081396   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:25.081991   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:25.082321   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:25.082330   19703 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:03:26.133521   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:03:26.133535   19703 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:03:26.133540   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.133696   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.133825   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.133956   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.134044   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.134222   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.134364   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.134372   19703 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:03:26.183718   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:03:26.183767   19703 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:03:26.183774   19703 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:03:26.183779   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.183908   19703 buildroot.go:166] provisioning hostname "ha-213000-m03"
	I1105 10:03:26.183917   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.184015   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.184096   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.184192   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.184276   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.184357   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.184497   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.184635   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.184643   19703 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m03 && echo "ha-213000-m03" | sudo tee /etc/hostname
	I1105 10:03:26.245764   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m03
	
	I1105 10:03:26.245779   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.245911   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.246034   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.246135   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.246225   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.246371   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.246514   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.246525   19703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:03:26.304895   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:03:26.304911   19703 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:03:26.304922   19703 buildroot.go:174] setting up certificates
	I1105 10:03:26.304929   19703 provision.go:84] configureAuth start
	I1105 10:03:26.304936   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetMachineName
	I1105 10:03:26.305070   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:26.305166   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.305256   19703 provision.go:143] copyHostCerts
	I1105 10:03:26.305284   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:03:26.305330   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:03:26.305336   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:03:26.305479   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:03:26.305706   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:03:26.305741   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:03:26.305746   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:03:26.305833   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:03:26.305989   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:03:26.306040   19703 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:03:26.306045   19703 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:03:26.306127   19703 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:03:26.306297   19703 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m03 san=[127.0.0.1 192.169.0.7 ha-213000-m03 localhost minikube]
	I1105 10:03:26.464060   19703 provision.go:177] copyRemoteCerts
	I1105 10:03:26.464124   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:03:26.464140   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.464292   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.464393   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.464474   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.464559   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:26.496436   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:03:26.496516   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:03:26.516600   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:03:26.516672   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:03:26.535607   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:03:26.535680   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:03:26.556861   19703 provision.go:87] duration metric: took 251.926291ms to configureAuth
	I1105 10:03:26.556882   19703 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:03:26.557331   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:26.557344   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:26.557488   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.557585   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.557665   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.557758   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.557840   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.557971   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.558096   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.558106   19703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:03:26.608963   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:03:26.608976   19703 buildroot.go:70] root file system type: tmpfs
	I1105 10:03:26.609053   19703 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:03:26.609068   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.609212   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.609317   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.609404   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.609483   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.609628   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.609762   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.609808   19703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:03:26.670604   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:03:26.670621   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:26.670766   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:26.670854   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.670959   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:26.671050   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:26.671201   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:26.671337   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:26.671349   19703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:03:28.299545   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:03:28.299559   19703 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:03:28.299574   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetURL
	I1105 10:03:28.299721   19703 main.go:141] libmachine: Docker is up and running!
	I1105 10:03:28.299730   19703 main.go:141] libmachine: Reticulating splines...
	I1105 10:03:28.299735   19703 client.go:171] duration metric: took 14.083083071s to LocalClient.Create
	I1105 10:03:28.299751   19703 start.go:167] duration metric: took 14.083123931s to libmachine.API.Create "ha-213000"
	I1105 10:03:28.299756   19703 start.go:293] postStartSetup for "ha-213000-m03" (driver="hyperkit")
	I1105 10:03:28.299763   19703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:03:28.299775   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.299931   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:03:28.299943   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.300030   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.300114   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.300191   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.300269   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:28.335699   19703 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:03:28.339827   19703 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:03:28.339839   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:03:28.339952   19703 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:03:28.340166   19703 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:03:28.340173   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:03:28.340432   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:03:28.353898   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:03:28.384126   19703 start.go:296] duration metric: took 84.362542ms for postStartSetup
	I1105 10:03:28.384153   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetConfigRaw
	I1105 10:03:28.384834   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:28.385024   19703 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:03:28.385403   19703 start.go:128] duration metric: took 14.223987778s to createHost
	I1105 10:03:28.385418   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.385511   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.385585   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.385675   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.385752   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.385869   19703 main.go:141] libmachine: Using SSH client type: native
	I1105 10:03:28.385999   19703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5620] 0x102e8300 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I1105 10:03:28.386006   19703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:03:28.435792   19703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730829808.714335766
	
	I1105 10:03:28.435806   19703 fix.go:216] guest clock: 1730829808.714335766
	I1105 10:03:28.435811   19703 fix.go:229] Guest: 2024-11-05 10:03:28.714335766 -0800 PST Remote: 2024-11-05 10:03:28.385413 -0800 PST m=+159.952313720 (delta=328.922766ms)
	I1105 10:03:28.435825   19703 fix.go:200] guest clock delta is within tolerance: 328.922766ms
	I1105 10:03:28.435829   19703 start.go:83] releasing machines lock for "ha-213000-m03", held for 14.274546252s
	I1105 10:03:28.435845   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.435975   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:28.463026   19703 out.go:177] * Found network options:
	I1105 10:03:28.524451   19703 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:03:28.550710   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:03:28.550742   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:03:28.550759   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.551499   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:03:28.551696   19703 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	W1105 10:03:28.551855   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:03:28.551871   19703 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:03:28.551938   19703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:03:28.551950   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.552047   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.552054   19703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:03:28.552073   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:03:28.552162   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.552174   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:03:28.552281   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.552298   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:03:28.552403   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:03:28.552430   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:03:28.552508   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	W1105 10:03:28.623667   19703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:03:28.623763   19703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:03:28.636396   19703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:03:28.636411   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:03:28.636477   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:03:28.651269   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:03:28.659803   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:03:28.668221   19703 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:03:28.668301   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:03:28.676635   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:03:28.684733   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:03:28.693062   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:03:28.701350   19703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:03:28.709600   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:03:28.717536   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:03:28.725790   19703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:03:28.734256   19703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:03:28.741810   19703 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:03:28.741868   19703 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:03:28.750498   19703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:03:28.757777   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:28.848477   19703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:03:28.867603   19703 start.go:495] detecting cgroup driver to use...
	I1105 10:03:28.867693   19703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:03:28.882469   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:03:28.893733   19703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:03:28.910872   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:03:28.921618   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:03:28.931860   19703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:03:28.955674   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:03:28.966135   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:03:28.981687   19703 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:03:28.984719   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:03:28.992276   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:03:29.007094   19703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:03:29.103508   19703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:03:29.207614   19703 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:03:29.207637   19703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:03:29.221804   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:29.326678   19703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:03:31.637809   19703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31112896s)
	I1105 10:03:31.637894   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:03:31.648270   19703 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:03:31.661112   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:03:31.671447   19703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:03:31.763823   19703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:03:31.864111   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:31.960037   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:03:31.972710   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:03:31.983457   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:32.073613   19703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:03:32.131634   19703 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:03:32.132384   19703 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:03:32.136690   19703 start.go:563] Will wait 60s for crictl version
	I1105 10:03:32.136768   19703 ssh_runner.go:195] Run: which crictl
	I1105 10:03:32.139750   19703 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:03:32.167666   19703 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:03:32.167752   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:03:32.185621   19703 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:03:32.230649   19703 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:03:32.272942   19703 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:03:32.316102   19703 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I1105 10:03:32.337095   19703 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:03:32.337555   19703 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:03:32.342111   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:03:32.352735   19703 mustload.go:65] Loading cluster: ha-213000
	I1105 10:03:32.352915   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:03:32.353165   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:32.353188   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:32.364602   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57732
	I1105 10:03:32.364908   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:32.365258   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:32.365274   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:32.365482   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:32.365613   19703 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:03:32.365706   19703 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:03:32.365791   19703 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:03:32.366950   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:03:32.367212   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:32.367238   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:32.378822   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57734
	I1105 10:03:32.379151   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:32.379474   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:32.379486   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:32.379723   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:32.379829   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:03:32.379937   19703 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.7
	I1105 10:03:32.379942   19703 certs.go:194] generating shared ca certs ...
	I1105 10:03:32.379956   19703 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.380143   19703 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:03:32.380237   19703 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:03:32.380246   19703 certs.go:256] generating profile certs ...
	I1105 10:03:32.380342   19703 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:03:32.380361   19703 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9
	I1105 10:03:32.380396   19703 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I1105 10:03:32.531495   19703 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 ...
	I1105 10:03:32.531519   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9: {Name:mked1b883793443cd41069aa04846ce3d13e3cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.531897   19703 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9 ...
	I1105 10:03:32.531907   19703 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9: {Name:mkc6838eeb283dd1eaf268f9b1d512c474d2ec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:03:32.532158   19703 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.7ae243e9 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:03:32.532364   19703 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.7ae243e9 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:03:32.532662   19703 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:03:32.532672   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:03:32.532701   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:03:32.532722   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:03:32.532741   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:03:32.532759   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:03:32.532779   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:03:32.532797   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:03:32.532819   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:03:32.532921   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:03:32.532977   19703 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:03:32.532985   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:03:32.533022   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:03:32.533055   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:03:32.533086   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:03:32.533156   19703 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:03:32.533192   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:03:32.533220   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.533241   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.533273   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:03:32.533416   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:03:32.533504   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:03:32.533579   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:03:32.533666   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:03:32.562870   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:03:32.566125   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:03:32.574941   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:03:32.577997   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:03:32.590640   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:03:32.593905   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:03:32.603441   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:03:32.607210   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:03:32.616800   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:03:32.620077   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:03:32.629598   19703 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:03:32.632721   19703 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:03:32.641637   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:03:32.661020   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:03:32.681195   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:03:32.700777   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:03:32.719964   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1105 10:03:32.740252   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:03:32.759642   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:03:32.778570   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:03:32.798835   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:03:32.818449   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:03:32.837230   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:03:32.856822   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:03:32.870143   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:03:32.883780   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:03:32.897915   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:03:32.911578   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:03:32.925103   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:03:32.938796   19703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:03:32.953077   19703 ssh_runner.go:195] Run: openssl version
	I1105 10:03:32.957362   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:03:32.966865   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.970193   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.970241   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:03:32.974304   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:03:32.983690   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:03:32.993172   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.996898   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:03:32.996967   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:03:33.001371   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:03:33.010757   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:03:33.020281   19703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.023901   19703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.023960   19703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:03:33.028229   19703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
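The `openssl x509 -hash` / `ln -fs .../<hash>.0` pairs above implement OpenSSL's subject-hash lookup scheme (e.g. `b5213941.0` for minikubeCA). A self-contained sketch with a throwaway self-signed cert (CN and paths are invented for illustration):

```shell
# Compute a cert's subject hash and create the <hash>.0 symlink that
# OpenSSL's cert-directory lookup expects.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"   # lookup name is <subject-hash>.0
ls -l "$dir/$h.0"
```

The `test -L || ln -fs` guard seen in the log just makes the operation idempotent across re-runs.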
	I1105 10:03:33.038224   19703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:03:33.041604   19703 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 10:03:33.041640   19703 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.2 docker true true} ...
	I1105 10:03:33.041693   19703 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:03:33.041717   19703 kube-vip.go:115] generating kube-vip config ...
	I1105 10:03:33.041764   19703 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:03:33.054458   19703 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:03:33.054499   19703 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:03:33.054566   19703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:03:33.063806   19703 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1105 10:03:33.063883   19703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1105 10:03:33.072691   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1105 10:03:33.072692   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1105 10:03:33.072707   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:03:33.072712   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:03:33.072691   19703 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1105 10:03:33.072776   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:03:33.072833   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1105 10:03:33.072833   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1105 10:03:33.084670   19703 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:03:33.084705   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1105 10:03:33.084732   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1105 10:03:33.084803   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1105 10:03:33.084830   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1105 10:03:33.084849   19703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1105 10:03:33.112916   19703 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1105 10:03:33.112953   19703 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1105 10:03:33.638088   19703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:03:33.646375   19703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:03:33.662178   19703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:03:33.676109   19703 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:03:33.690081   19703 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:03:33.693205   19703 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
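The `/etc/hosts` update above uses a grep-out-then-append idiom: strip any stale line for the host name, append the fresh mapping to a temp file, then `cp` it over the original. A sketch on a scratch file (the IPs here mirror the log but the stale entry is an invented example; requires bash for the `$'\t'` tab literal):

```shell
# Replace any existing mapping for control-plane.minikube.internal with a
# fresh one, using the same grep/echo/cp pattern as the log.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.169.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.169.0.254\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$hosts"
cat "$hosts"
```

Writing to `/tmp/h.$$` first and copying keeps the real file intact if the pipeline fails partway, and `cp` (rather than `mv`) preserves the destination's ownership and mode, which matters for `/etc/hosts`.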
	I1105 10:03:33.703729   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:03:33.801135   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:03:33.817719   19703 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:03:33.818043   19703 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:03:33.818070   19703 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:03:33.829754   19703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57737
	I1105 10:03:33.830093   19703 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:03:33.830434   19703 main.go:141] libmachine: Using API Version  1
	I1105 10:03:33.830444   19703 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:03:33.830649   19703 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:03:33.830744   19703 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:03:33.830850   19703 start.go:317] joinCluster: &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:03:33.830949   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1105 10:03:33.830963   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:03:33.831038   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:03:33.831160   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:03:33.831264   19703 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:03:33.831351   19703 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:03:33.915396   19703 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:03:33.915427   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3af4oc.mrofw1iihstmy2lp --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m03 --control-plane --apiserver-advertise-address=192.169.0.7 --apiserver-bind-port=8443"
	I1105 10:04:05.463364   19703 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3af4oc.mrofw1iihstmy2lp --discovery-token-ca-cert-hash sha256:2aaa6cfcc57cd555da7aed58a5e5ed7a34a7fb597dea4022fdf5920ac62a4564 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-213000-m03 --control-plane --apiserver-advertise-address=192.169.0.7 --apiserver-bind-port=8443": (31.548185064s)
	I1105 10:04:05.463394   19703 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1105 10:04:05.926039   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-213000-m03 minikube.k8s.io/updated_at=2024_11_05T10_04_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=ha-213000 minikube.k8s.io/primary=false
	I1105 10:04:06.005817   19703 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-213000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1105 10:04:06.089769   19703 start.go:319] duration metric: took 32.259206586s to joinCluster
	I1105 10:04:06.089835   19703 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:04:06.090023   19703 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:04:06.144884   19703 out.go:177] * Verifying Kubernetes components...
	I1105 10:04:06.218462   19703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:04:06.491890   19703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:04:06.522724   19703 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:04:06.522981   19703 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x11e86e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:04:06.523027   19703 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:04:06.523216   19703 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m03" to be "Ready" ...
	I1105 10:04:06.523272   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:06.523278   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:06.523284   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:06.523289   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:06.525762   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:07.024768   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:07.024785   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:07.024792   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:07.024796   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:07.026838   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:07.523802   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:07.523816   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:07.523823   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:07.523827   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:07.525952   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.024543   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:08.024558   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:08.024565   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:08.024567   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:08.026818   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.523406   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:08.523421   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:08.523428   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:08.523431   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:08.525613   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:08.526028   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:09.023489   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:09.023507   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:09.023515   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:09.023518   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:09.025718   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:09.524490   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:09.524507   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:09.524536   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:09.524542   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:09.526550   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:10.024748   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:10.024763   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:10.024770   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:10.024773   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:10.026854   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:10.523405   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:10.523428   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:10.523434   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:10.523438   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:10.525879   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:10.526310   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:11.024586   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:11.024601   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:11.024608   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:11.024611   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:11.026801   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:11.524591   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:11.524616   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:11.524627   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:11.524633   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:11.528710   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:12.023846   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:12.023871   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:12.023892   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:12.023899   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:12.026343   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:12.523660   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:12.523678   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:12.523687   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:12.523692   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:12.526168   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:12.526545   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:13.024493   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:13.024549   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:13.024558   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:13.024562   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:13.026612   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:13.523333   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:13.523377   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:13.523386   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:13.523391   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:13.526060   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.023778   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:14.023813   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:14.023821   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:14.023826   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:14.026277   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.524791   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:14.524807   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:14.524814   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:14.524818   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:14.526944   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:14.527389   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:15.024235   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:15.024251   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:15.024257   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:15.024261   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:15.026360   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:15.523297   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:15.523315   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:15.523340   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:15.523344   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:15.525650   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.024088   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:16.024104   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:16.024111   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:16.024114   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:16.026234   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.524762   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:16.524781   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:16.524790   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:16.524794   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:16.527186   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:16.527528   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:17.024111   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:17.024127   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:17.024133   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:17.024137   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:17.026641   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:17.523462   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:17.523498   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:17.523506   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:17.523509   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:17.525790   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:18.023254   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:18.023271   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:18.023277   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:18.023280   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:18.025709   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:18.523323   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:18.523337   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:18.523343   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:18.523347   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:18.526016   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:19.023466   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:19.023481   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:19.023498   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:19.023501   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:19.026019   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:19.026436   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:19.523232   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:19.523250   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:19.523258   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:19.523262   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:19.525574   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:20.025183   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:20.025202   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:20.025211   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:20.025217   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:20.027796   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:20.524157   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:20.524199   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:20.524209   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:20.524214   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:20.526298   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:21.023312   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:21.023328   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:21.023335   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:21.023338   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:21.025776   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:21.525084   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:21.525110   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:21.525123   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:21.525129   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:21.528173   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:21.528632   19703 node_ready.go:53] node "ha-213000-m03" has status "Ready":"False"
	I1105 10:04:22.023245   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.023263   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.023272   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.023276   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.025668   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.524560   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.524575   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.524580   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.524582   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.526635   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.527110   19703 node_ready.go:49] node "ha-213000-m03" has status "Ready":"True"
	I1105 10:04:22.527120   19703 node_ready.go:38] duration metric: took 16.004036788s for node "ha-213000-m03" to be "Ready" ...
	I1105 10:04:22.527128   19703 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:04:22.527166   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:22.527172   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.527177   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.527182   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.533505   19703 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:04:22.539225   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.539271   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:04:22.539277   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.539283   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.539289   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.541288   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.541882   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.541890   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.541895   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.541898   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.543858   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.544129   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.544138   19703 pod_ready.go:82] duration metric: took 4.901387ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.544145   19703 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.544181   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:04:22.544186   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.544191   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.544195   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.545938   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.546421   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.546429   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.546436   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.546439   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.548600   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.548988   19703 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.548997   19703 pod_ready.go:82] duration metric: took 4.847138ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.549007   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.549053   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:04:22.549059   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.549065   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.549067   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.550912   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.551584   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:22.551591   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.551597   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.551600   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.553276   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.553666   19703 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.553676   19703 pod_ready.go:82] duration metric: took 4.662923ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.553683   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.553721   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:04:22.553726   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.553732   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.553735   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.555620   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.556112   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:22.556119   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.556124   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.556128   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.557964   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:22.558427   19703 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:22.558437   19703 pod_ready.go:82] duration metric: took 4.748625ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.558444   19703 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:22.725676   19703 request.go:632] Waited for 167.192719ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:22.725734   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:22.725741   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.725750   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.725757   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.728337   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:22.924860   19703 request.go:632] Waited for 196.058895ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.925006   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:22.925029   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:22.925044   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:22.925054   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:22.929161   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:23.125347   19703 request.go:632] Waited for 65.258433ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.125410   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.125417   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.125424   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.125429   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.128075   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.326631   19703 request.go:632] Waited for 198.115235ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.326702   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.326713   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.326721   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.326726   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.329257   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.559388   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:23.559410   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.559419   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.559423   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.561604   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:23.724668   19703 request.go:632] Waited for 162.701435ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.724727   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:23.724733   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:23.724740   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:23.724746   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:23.726755   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:24.059136   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:04:24.059155   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.059188   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.059194   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.061686   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:24.125899   19703 request.go:632] Waited for 63.704238ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:24.126005   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:24.126016   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.126028   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.126034   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.129471   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:24.129800   19703 pod_ready.go:93] pod "etcd-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.129809   19703 pod_ready.go:82] duration metric: took 1.571374275s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.129820   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.325915   19703 request.go:632] Waited for 196.033511ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:04:24.326035   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:04:24.326046   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.326057   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.326064   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.329258   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:24.525894   19703 request.go:632] Waited for 195.976303ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:24.525950   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:24.525957   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.525965   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.525970   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.531038   19703 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:04:24.531331   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.531341   19703 pod_ready.go:82] duration metric: took 401.519758ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.531348   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.725411   19703 request.go:632] Waited for 194.029144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:04:24.725452   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:04:24.725457   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.725484   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.725488   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.727336   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:24.924946   19703 request.go:632] Waited for 197.104111ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:24.925003   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:24.925010   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:24.925018   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:24.925024   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:24.927806   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:24.928044   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:24.928052   19703 pod_ready.go:82] duration metric: took 396.702505ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:24.928062   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.125637   19703 request.go:632] Waited for 197.516414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:04:25.125722   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:04:25.125731   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.125739   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.125747   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.128388   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.325342   19703 request.go:632] Waited for 196.567129ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:25.325384   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:25.325390   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.325430   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.325437   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.327703   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.327989   19703 pod_ready.go:93] pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:25.327998   19703 pod_ready.go:82] duration metric: took 399.934252ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.328005   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.526534   19703 request.go:632] Waited for 198.484556ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:04:25.526593   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:04:25.526601   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.526608   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.526614   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.528989   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.725913   19703 request.go:632] Waited for 196.422028ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:25.725987   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:25.725997   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.726008   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.726031   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.728724   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:25.729094   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:25.729103   19703 pod_ready.go:82] duration metric: took 401.096776ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.729112   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:25.924767   19703 request.go:632] Waited for 195.60365ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:04:25.924865   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:04:25.924875   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:25.924888   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:25.924896   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:25.928404   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:26.125908   19703 request.go:632] Waited for 196.895961ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:26.125983   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:26.125991   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.125999   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.126005   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.128293   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.128631   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.128641   19703 pod_ready.go:82] duration metric: took 399.525738ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.128647   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.324632   19703 request.go:632] Waited for 195.949532ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:04:26.324692   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:04:26.324698   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.324704   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.324708   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.326997   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.525533   19703 request.go:632] Waited for 198.105799ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.525578   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.525606   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.525616   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.525621   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.529215   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:26.529581   19703 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.529590   19703 pod_ready.go:82] duration metric: took 400.941913ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.529597   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.726009   19703 request.go:632] Waited for 196.373053ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:04:26.726076   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:04:26.726082   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.726088   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.726092   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.728138   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.925481   19703 request.go:632] Waited for 196.839411ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.925524   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:26.925543   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:26.925555   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:26.925559   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:26.927642   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:26.927909   19703 pod_ready.go:93] pod "kube-proxy-5ldvg" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:26.927918   19703 pod_ready.go:82] duration metric: took 398.31947ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:26.927925   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.124645   19703 request.go:632] Waited for 196.662774ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:04:27.124698   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:04:27.124740   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.124753   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.124761   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.128295   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:27.325739   19703 request.go:632] Waited for 196.804785ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:27.325845   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:27.325854   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.325862   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.325867   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.328452   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.328710   19703 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:27.328719   19703 pod_ready.go:82] duration metric: took 400.792251ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.328725   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.525473   19703 request.go:632] Waited for 196.70325ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:04:27.525570   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:04:27.525581   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.525593   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.525602   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.528326   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.725203   19703 request.go:632] Waited for 196.519889ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:27.725279   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:27.725285   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.725292   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.725297   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.727708   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:27.728131   19703 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:27.728140   19703 pod_ready.go:82] duration metric: took 399.413452ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.728146   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:27.924670   19703 request.go:632] Waited for 196.486132ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:04:27.924745   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:04:27.924768   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:27.924780   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:27.924785   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:27.926872   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.126299   19703 request.go:632] Waited for 199.099089ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:28.126434   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:04:28.126444   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.126455   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.126469   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.129846   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:28.130229   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.130241   19703 pod_ready.go:82] duration metric: took 402.092729ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.130250   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.325028   19703 request.go:632] Waited for 194.730914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:04:28.325106   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:04:28.325115   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.325127   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.325137   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.327834   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.524776   19703 request.go:632] Waited for 196.527612ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:28.524860   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:04:28.524877   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.524889   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.524897   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.528055   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:28.528583   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.528595   19703 pod_ready.go:82] duration metric: took 398.343246ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.528604   19703 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.724665   19703 request.go:632] Waited for 196.022312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:04:28.724698   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:04:28.724704   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.724714   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.724740   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.726671   19703 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:04:28.924585   19703 request.go:632] Waited for 197.482088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:28.924621   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:04:28.924626   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.924638   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.924641   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.927175   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:28.927434   19703 pod_ready.go:93] pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 10:04:28.927445   19703 pod_ready.go:82] duration metric: took 398.83876ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:04:28.927453   19703 pod_ready.go:39] duration metric: took 6.40037569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:04:28.927464   19703 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:04:28.927539   19703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:04:28.939767   19703 api_server.go:72] duration metric: took 22.850118644s to wait for apiserver process to appear ...
	I1105 10:04:28.939780   19703 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:04:28.939792   19703 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:04:28.942841   19703 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:04:28.942878   19703 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:04:28.942883   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:28.942889   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:28.942894   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:28.943424   19703 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:04:28.943458   19703 api_server.go:141] control plane version: v1.31.2
	I1105 10:04:28.943466   19703 api_server.go:131] duration metric: took 3.681494ms to wait for apiserver health ...
	I1105 10:04:28.943471   19703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:04:29.125181   19703 request.go:632] Waited for 181.649913ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.125250   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.125257   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.125265   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.125273   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.129049   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:29.134047   19703 system_pods.go:59] 24 kube-system pods found
	I1105 10:04:29.134060   19703 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:04:29.134064   19703 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:04:29.134067   19703 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:04:29.134070   19703 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:04:29.134073   19703 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:04:29.134076   19703 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:04:29.134078   19703 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:04:29.134083   19703 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:04:29.134089   19703 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:04:29.134092   19703 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:04:29.134095   19703 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:04:29.134098   19703 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:04:29.134101   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:04:29.134103   19703 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:04:29.134106   19703 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:04:29.134109   19703 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:04:29.134113   19703 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:04:29.134116   19703 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:04:29.134119   19703 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:04:29.134121   19703 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:04:29.134124   19703 system_pods.go:61] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:04:29.134126   19703 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:04:29.134129   19703 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:04:29.134131   19703 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:04:29.134136   19703 system_pods.go:74] duration metric: took 190.663227ms to wait for pod list to return data ...
	I1105 10:04:29.134141   19703 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:04:29.325174   19703 request.go:632] Waited for 190.972254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:04:29.325306   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:04:29.325317   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.325328   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.325334   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.328806   19703 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:04:29.328877   19703 default_sa.go:45] found service account: "default"
	I1105 10:04:29.328886   19703 default_sa.go:55] duration metric: took 194.742768ms for default service account to be created ...
	I1105 10:04:29.328892   19703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:04:29.525825   19703 request.go:632] Waited for 196.894286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.525885   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:04:29.525891   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.525900   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.525906   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.530238   19703 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:04:29.535151   19703 system_pods.go:86] 24 kube-system pods found
	I1105 10:04:29.535162   19703 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:04:29.535166   19703 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:04:29.535169   19703 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:04:29.535173   19703 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running
	I1105 10:04:29.535176   19703 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:04:29.535179   19703 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:04:29.535182   19703 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running
	I1105 10:04:29.535186   19703 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:04:29.535189   19703 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:04:29.535192   19703 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running
	I1105 10:04:29.535195   19703 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:04:29.535198   19703 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:04:29.535203   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running
	I1105 10:04:29.535206   19703 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:04:29.535209   19703 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:04:29.535212   19703 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:04:29.535214   19703 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:04:29.535217   19703 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:04:29.535220   19703 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running
	I1105 10:04:29.535224   19703 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:04:29.535226   19703 system_pods.go:89] "kube-vip-ha-213000" [970e81e4-8295-4cc4-9b62-b943e6e6a003] Running
	I1105 10:04:29.535229   19703 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:04:29.535232   19703 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:04:29.535236   19703 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:04:29.535241   19703 system_pods.go:126] duration metric: took 206.346852ms to wait for k8s-apps to be running ...
	I1105 10:04:29.535246   19703 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:04:29.535311   19703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:04:29.546979   19703 system_svc.go:56] duration metric: took 11.728241ms WaitForService to wait for kubelet
	I1105 10:04:29.546999   19703 kubeadm.go:582] duration metric: took 23.457354958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:04:29.547010   19703 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:04:29.724995   19703 request.go:632] Waited for 177.933168ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:04:29.725067   19703 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:04:29.725074   19703 round_trippers.go:469] Request Headers:
	I1105 10:04:29.725082   19703 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:04:29.725088   19703 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:04:29.727706   19703 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:04:29.728430   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728439   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728446   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728449   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728453   19703 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:04:29.728456   19703 node_conditions.go:123] node cpu capacity is 2
	I1105 10:04:29.728459   19703 node_conditions.go:105] duration metric: took 181.447674ms to run NodePressure ...
	I1105 10:04:29.728466   19703 start.go:241] waiting for startup goroutines ...
	I1105 10:04:29.728479   19703 start.go:255] writing updated cluster config ...
	I1105 10:04:29.729489   19703 ssh_runner.go:195] Run: rm -f paused
	I1105 10:04:29.979871   19703 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1105 10:04:30.017888   19703 out.go:177] * Done! kubectl is now configured to use "ha-213000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d756c554cb1804008ed0d83f76add780a56ab524ce9ad727444994833786ca2/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14a06ee63dae33c8dba35c6c5dae9567da2ca60899210abc9f317c0880b139fc/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:01:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc924e17f3bb751fc1e52153e5ef02a65f98bbb979139ce33eaa22d0798983b8/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967239546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967309107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967317540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:50 ha-213000 dockerd[1237]: time="2024-11-05T18:01:50.967390804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107710141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107910037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.107968019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.108244444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119482556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119770623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.119883235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:01:51 ha-213000 dockerd[1237]: time="2024-11-05T18:01:51.120049510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.619993345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620106148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620120050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 dockerd[1237]: time="2024-11-05T18:04:31.620209774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:31 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:04:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a852f09c7c372466d6eaee2bbf93a0549f278dabf6e08a4bff1ae7c770405574/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 05 18:04:33 ha-213000 cri-dockerd[1127]: time="2024-11-05T18:04:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358121990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358264406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358298713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:04:33 ha-213000 dockerd[1237]: time="2024-11-05T18:04:33.358445332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	13c126a54f1e3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Running             busybox                   0                   a852f09c7c372       busybox-7dff88458-q5j74
	655c6025b3ad3       c69fa2e9cbf5f                                                                                         6 minutes ago       Running             coredns                   0                   fc924e17f3bb7       coredns-7c65d6cfc9-cv2cc
	478b52af51d4c       c69fa2e9cbf5f                                                                                         6 minutes ago       Running             coredns                   0                   14a06ee63dae3       coredns-7c65d6cfc9-q96rw
	a696b9219e867       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   8d756c554cb18       storage-provisioner
	c15d829a94cc1       kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16              6 minutes ago       Running             kindnet-cni               0                   fc1560dd926ec       kindnet-hppzk
	1707dd1e7b710       505d571f5fd56                                                                                         6 minutes ago       Running             kube-proxy                0                   c677886629450       kube-proxy-s8xxj
	e133549e344f8       ghcr.io/kube-vip/kube-vip@sha256:1ba8e6e7fe678a8779986a6b88a1f391c63f7fe3edd34b167dceed3f66e8c87e     6 minutes ago       Running             kube-vip                  0                   c50c39a35d466       kube-vip-ha-213000
	a3c0c64a3782d       9499c9960544e                                                                                         6 minutes ago       Running             kube-apiserver            0                   c31f45140546c       kube-apiserver-ha-213000
	0ea9be13ab8cd       847c7bc1a5418                                                                                         6 minutes ago       Running             kube-scheduler            0                   e5947d7e736c7       kube-scheduler-ha-213000
	968f538b61d4e       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      0                   75b49749f37e9       etcd-ha-213000
	3abc7a0629ac1       0486b6c53a1b5                                                                                         6 minutes ago       Running             kube-controller-manager   0                   356e1160051cf       kube-controller-manager-ha-213000
	
	
	==> coredns [478b52af51d4] <==
	[INFO] 10.244.0.4:55854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091157s
	[INFO] 10.244.0.4:46292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064292s
	[INFO] 10.244.0.4:40657 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075287s
	[INFO] 10.244.0.4:40797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000047063s
	[INFO] 10.244.0.4:57944 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092384s
	[INFO] 10.244.2.2:46924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091299s
	[INFO] 10.244.2.2:58313 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000054156s
	[INFO] 10.244.2.2:60784 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097833s
	[INFO] 10.244.2.2:45453 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000050266s
	[INFO] 10.244.2.2:34445 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089937s
	[INFO] 10.244.2.2:47005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097467s
	[INFO] 10.244.1.2:50221 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057047s
	[INFO] 10.244.1.2:57677 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089203s
	[INFO] 10.244.0.4:55860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068653s
	[INFO] 10.244.2.2:43135 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074016s
	[INFO] 10.244.2.2:55939 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120434s
	[INFO] 10.244.2.2:50062 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004236s
	[INFO] 10.244.1.2:47130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093036s
	[INFO] 10.244.1.2:36124 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107815s
	[INFO] 10.244.1.2:47802 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000854s
	[INFO] 10.244.1.2:50939 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000076401s
	[INFO] 10.244.0.4:52439 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046668s
	[INFO] 10.244.0.4:59917 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065899s
	[INFO] 10.244.2.2:54610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146586s
	[INFO] 10.244.2.2:44903 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000045712s
	
	
	==> coredns [655c6025b3ad] <==
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41565 - 52279 "HINFO IN 3928448342492679704.6484769811595158491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01055942s
	[INFO] 10.244.1.2:44772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001930473s
	[INFO] 10.244.1.2:55396 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.173730652s
	[INFO] 10.244.1.2:55075 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.046822424s
	[INFO] 10.244.2.2:46916 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000083419s
	[INFO] 10.244.2.2:50720 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010115s
	[INFO] 10.244.1.2:40476 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129724s
	[INFO] 10.244.1.2:38997 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096588s
	[INFO] 10.244.1.2:47386 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084243s
	[INFO] 10.244.0.4:36440 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000654701s
	[INFO] 10.244.0.4:54567 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087223s
	[INFO] 10.244.0.4:51050 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103169s
	[INFO] 10.244.2.2:55487 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00066824s
	[INFO] 10.244.2.2:46388 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075057s
	[INFO] 10.244.1.2:44219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153172s
	[INFO] 10.244.1.2:57067 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012789s
	[INFO] 10.244.0.4:39514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159605s
	[INFO] 10.244.0.4:48601 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049327s
	[INFO] 10.244.0.4:42037 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113025s
	[INFO] 10.244.2.2:54065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100908s
	[INFO] 10.244.0.4:48546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091627s
	[INFO] 10.244.0.4:58260 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121652s
	[INFO] 10.244.2.2:59084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090378s
	[INFO] 10.244.2.2:46960 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000044449s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:01 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a564c48e26a04536b809c68ac140133d
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    a364bf87-b805-465e-9b8e-7bb15a7511fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m35s
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m42s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s                  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s                  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s                  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                6m11s                  kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:05:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:04:51 +0000   Tue, 05 Nov 2024 18:06:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe9d6fab7c594c258d6faf081338352a
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    648e1173-cbdb-42eb-9fce-79e6f778bcc4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m12s
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-213000-m02 status is now: NodeNotReady
	
	
	Name:               ha-213000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:04 +0000   Tue, 05 Nov 2024 18:04:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-213000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 36fce1bb5353483a8c61e47d06795490
	  System UUID:                9e834703-0000-0000-a798-2d165259ce86
	  Boot ID:                    52f0306a-86b9-41a1-bf8e-c6bebad66edd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x9hwg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 etcd-ha-213000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-trfhn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-apiserver-ha-213000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-213000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-5ldvg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-ha-213000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-vip-ha-213000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x8 over 3m58s)  kubelet          Node ha-213000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x8 over 3m58s)  kubelet          Node ha-213000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x7 over 3m58s)  kubelet          Node ha-213000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-213000-m03 event: Registered Node ha-213000-m03 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:04:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:05:29 +0000   Tue, 05 Nov 2024 18:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 9dbfab1abbaa466d920d386afdae83f4
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    7277bbeb-aa13-4ef8-b3e3-22ba82158b7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-p4bx6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-m45pk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-213000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.822548] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[Nov 5 18:01] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.371107] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +0.098303] systemd-fstab-generator[518]: Ignoring "noauto" option for root device
	[  +1.766719] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.299452] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.101798] systemd-fstab-generator[851]: Ignoring "noauto" option for root device
	[  +0.116525] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[  +2.427440] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.092436] systemd-fstab-generator[1092]: Ignoring "noauto" option for root device
	[  +0.099684] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.061433] kauditd_printk_skb: 233 callbacks suppressed
	[  +0.078398] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +3.438367] systemd-fstab-generator[1223]: Ignoring "noauto" option for root device
	[  +2.210589] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.377139] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +3.489712] systemd-fstab-generator[1610]: Ignoring "noauto" option for root device
	[  +1.395421] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.860785] systemd-fstab-generator[2101]: Ignoring "noauto" option for root device
	[  +0.083307] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.458645] kauditd_printk_skb: 12 callbacks suppressed
	[ +17.344532] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 5 18:02] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [968f538b61d4] <==
	{"level":"warn","ts":"2024-11-05T18:07:34.774002Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:35.466979Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:35.467077Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.468800Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.468851Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.774640Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:39.774659Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:43.472095Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:43.472267Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:44.775772Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:44.775850Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:47.474289Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:47.474371Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:49.776765Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:49.776821Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:51.478515Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:51.478565Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:54.777722Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:54.777784Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:55.479532Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:55.479668Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:59.481512Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.6:2380/version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:59.481564Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"585aaf1d56a73c02","error":"Get \"https://192.169.0.6:2380/version\": dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:59.778796Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"585aaf1d56a73c02","rtt":"713.08µs","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:07:59.778823Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"585aaf1d56a73c02","rtt":"6.613506ms","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	
	
	==> kernel <==
	 18:08:01 up 7 min,  0 users,  load average: 0.27, 0.34, 0.18
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c15d829a94cc] <==
	I1105 18:07:27.422022       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:37.414652       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:37.414680       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:07:37.414837       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:37.414873       1 main.go:301] handling current node
	I1105 18:07:37.414884       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:37.414910       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:37.414980       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:37.415014       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:47.420917       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:47.420936       1 main.go:301] handling current node
	I1105 18:07:47.420946       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:47.420949       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:47.421047       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:47.421051       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:47.421211       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:47.421219       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:07:57.414448       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:07:57.414512       1 main.go:301] handling current node
	I1105 18:07:57.414523       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:07:57.414528       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:07:57.414708       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:07:57.414734       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:07:57.414784       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:07:57.414811       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a3c0c64a3782] <==
	I1105 18:01:24.508797       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1105 18:01:24.512609       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	I1105 18:01:24.513340       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:01:24.515760       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:01:25.122836       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 18:01:26.576236       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 18:01:26.588108       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1105 18:01:26.594865       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 18:01:30.774314       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1105 18:01:30.832293       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1105 18:04:35.148097       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57771: use of closed network connection
	E1105 18:04:35.373995       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57773: use of closed network connection
	E1105 18:04:35.583158       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57775: use of closed network connection
	E1105 18:04:35.792755       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57777: use of closed network connection
	E1105 18:04:35.994700       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57779: use of closed network connection
	E1105 18:04:36.197292       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57781: use of closed network connection
	E1105 18:04:36.403420       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57783: use of closed network connection
	E1105 18:04:36.603032       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57785: use of closed network connection
	E1105 18:04:36.809398       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57787: use of closed network connection
	E1105 18:04:37.166464       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57790: use of closed network connection
	E1105 18:04:37.374005       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57792: use of closed network connection
	E1105 18:04:37.593134       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57794: use of closed network connection
	E1105 18:04:37.798340       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57796: use of closed network connection
	E1105 18:04:37.996915       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57798: use of closed network connection
	E1105 18:04:38.199615       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:57800: use of closed network connection
	
	
	==> kube-controller-manager [3abc7a0629ac] <==
	I1105 18:04:59.206231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.206302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.622575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:04:59.926061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.639227       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-213000-m04"
	I1105 18:05:00.668381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.863718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:00.938911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:01.879354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000"
	I1105 18:05:01.917330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:02.023557       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:04.331384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m03"
	I1105 18:05:09.430571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.686438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.687560       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-213000-m04"
	I1105 18:05:21.698524       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:21.928429       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:05:29.675436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:06:15.656491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-213000-m04"
	I1105 18:06:15.656514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:15.666046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:15.680720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.448781ms"
	I1105 18:06:15.680907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.823µs"
	I1105 18:06:15.906178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	I1105 18:06:20.788789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m02"
	
	
	==> kube-proxy [1707dd1e7b71] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:01:32.964306       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:01:32.975224       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:01:32.975302       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:01:33.004972       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:01:33.005019       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:01:33.005036       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:01:33.007241       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:01:33.007726       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:01:33.007754       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:01:33.009040       1 config.go:199] "Starting service config controller"
	I1105 18:01:33.009388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:01:33.009596       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:01:33.009623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:01:33.010305       1 config.go:328] "Starting node config controller"
	I1105 18:01:33.010330       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:01:33.110597       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:01:33.110614       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:01:33.110623       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0ea9be13ab8c] <==
	E1105 18:01:24.259190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 18:01:26.254727       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:04:03.280280       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-trfhn\": pod kindnet-trfhn is already assigned to node \"ha-213000-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-trfhn" node="ha-213000-m03"
	E1105 18:04:03.285002       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6f39544f-a014-444c-8ad7-779e1940d254(kube-system/kindnet-trfhn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-trfhn"
	E1105 18:04:03.285696       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-trfhn\": pod kindnet-trfhn is already assigned to node \"ha-213000-m03\"" pod="kube-system/kindnet-trfhn"
	I1105 18:04:03.285865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-trfhn" node="ha-213000-m03"
	I1105 18:04:31.258177       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="69f86bc8-78ea-4277-b688-fd445c4f8f6e" pod="default/busybox-7dff88458-89r49" assumedNode="ha-213000-m02" currentNode="ha-213000-m03"
	I1105 18:04:31.268574       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="3a5c2f7c-8906-4561-8875-8736f45e3fda" pod="default/busybox-7dff88458-x9hwg" assumedNode="ha-213000-m03" currentNode="ha-213000-m02"
	E1105 18:04:31.273427       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-89r49\": pod busybox-7dff88458-89r49 is already assigned to node \"ha-213000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-89r49" node="ha-213000-m03"
	E1105 18:04:31.273527       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 69f86bc8-78ea-4277-b688-fd445c4f8f6e(default/busybox-7dff88458-89r49) was assumed on ha-213000-m03 but assigned to ha-213000-m02" pod="default/busybox-7dff88458-89r49"
	E1105 18:04:31.273547       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-89r49\": pod busybox-7dff88458-89r49 is already assigned to node \"ha-213000-m02\"" pod="default/busybox-7dff88458-89r49"
	I1105 18:04:31.273836       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-89r49" node="ha-213000-m02"
	I1105 18:04:31.281777       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="7f2e1057-5c45-4255-9c8e-d1eba882f2e5" pod="default/busybox-7dff88458-q5j74" assumedNode="ha-213000" currentNode="ha-213000-m03"
	E1105 18:04:31.287338       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x9hwg\": pod busybox-7dff88458-x9hwg is already assigned to node \"ha-213000-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-x9hwg" node="ha-213000-m02"
	E1105 18:04:31.287388       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a5c2f7c-8906-4561-8875-8736f45e3fda(default/busybox-7dff88458-x9hwg) was assumed on ha-213000-m02 but assigned to ha-213000-m03" pod="default/busybox-7dff88458-x9hwg"
	E1105 18:04:31.287401       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x9hwg\": pod busybox-7dff88458-x9hwg is already assigned to node \"ha-213000-m03\"" pod="default/busybox-7dff88458-x9hwg"
	I1105 18:04:31.287615       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x9hwg" node="ha-213000-m03"
	E1105 18:04:31.291529       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-q5j74\": pod busybox-7dff88458-q5j74 is already assigned to node \"ha-213000\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-q5j74" node="ha-213000-m03"
	E1105 18:04:31.291599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7f2e1057-5c45-4255-9c8e-d1eba882f2e5(default/busybox-7dff88458-q5j74) was assumed on ha-213000-m03 but assigned to ha-213000" pod="default/busybox-7dff88458-q5j74"
	E1105 18:04:31.291701       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-q5j74\": pod busybox-7dff88458-q5j74 is already assigned to node \"ha-213000\"" pod="default/busybox-7dff88458-q5j74"
	I1105 18:04:31.291992       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-q5j74" node="ha-213000"
	E1105 18:04:59.242744       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2bx5\": pod kube-proxy-r2bx5 is already assigned to node \"ha-213000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2bx5" node="ha-213000-m04"
	E1105 18:04:59.242812       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2bx5\": pod kube-proxy-r2bx5 is already assigned to node \"ha-213000-m04\"" pod="kube-system/kube-proxy-r2bx5"
	E1105 18:04:59.243648       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4qmgf\": pod kindnet-4qmgf is already assigned to node \"ha-213000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4qmgf" node="ha-213000-m04"
	E1105 18:04:59.243714       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4qmgf\": pod kindnet-4qmgf is already assigned to node \"ha-213000-m04\"" pod="kube-system/kindnet-4qmgf"
	
	
	==> kubelet <==
	Nov 05 18:03:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:04:27 ha-213000 kubelet[2108]: E1105 18:04:27.199259    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:04:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:04:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:04:31 ha-213000 kubelet[2108]: I1105 18:04:31.275013    2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=180.274999296 podStartE2EDuration="3m0.274999296s" podCreationTimestamp="2024-11-05 18:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-05 18:01:51.416826219 +0000 UTC m=+24.377833961" watchObservedRunningTime="2024-11-05 18:04:31.274999296 +0000 UTC m=+184.236007040"
	Nov 05 18:04:31 ha-213000 kubelet[2108]: I1105 18:04:31.374929    2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88s2k\" (UniqueName: \"kubernetes.io/projected/7f2e1057-5c45-4255-9c8e-d1eba882f2e5-kube-api-access-88s2k\") pod \"busybox-7dff88458-q5j74\" (UID: \"7f2e1057-5c45-4255-9c8e-d1eba882f2e5\") " pod="default/busybox-7dff88458-q5j74"
	Nov 05 18:04:34 ha-213000 kubelet[2108]: I1105 18:04:34.308278    2108 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-q5j74" podStartSLOduration=1.766756631 podStartE2EDuration="3.308264067s" podCreationTimestamp="2024-11-05 18:04:31 +0000 UTC" firstStartedPulling="2024-11-05 18:04:31.760833399 +0000 UTC m=+184.721841133" lastFinishedPulling="2024-11-05 18:04:33.302340832 +0000 UTC m=+186.263348569" observedRunningTime="2024-11-05 18:04:34.308023413 +0000 UTC m=+187.269031168" watchObservedRunningTime="2024-11-05 18:04:34.308264067 +0000 UTC m=+187.269271805"
	Nov 05 18:04:36 ha-213000 kubelet[2108]: E1105 18:04:36.603308    2108 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54556->127.0.0.1:37937: write tcp 127.0.0.1:54556->127.0.0.1:37937: write: broken pipe
	Nov 05 18:05:27 ha-213000 kubelet[2108]: E1105 18:05:27.199657    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:05:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:05:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:06:27 ha-213000 kubelet[2108]: E1105 18:06:27.202597    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:06:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:06:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:07:27 ha-213000 kubelet[2108]: E1105 18:07:27.199645    2108 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:07:27 ha-213000 kubelet[2108]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:07:27 ha-213000 kubelet[2108]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (156.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-213000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E1105 10:14:34.020215   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-213000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (2m32.153098467s)

                                                
                                                
-- stdout --
	* [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	* Restarting existing hyperkit VM for "ha-213000" ...
	* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	* Enabled addons: 
	
	* Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	* Restarting existing hyperkit VM for "ha-213000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-213000-m04" worker node in "ha-213000" cluster
	* Restarting existing hyperkit VM for "ha-213000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 10:12:21.490688   20650 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.490996   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491002   20650 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.491006   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491183   20650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.492670   20650 out.go:352] Setting JSON to false
	I1105 10:12:21.523908   20650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7910,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:12:21.523997   20650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:12:21.546247   20650 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:12:21.588131   20650 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:12:21.588174   20650 notify.go:220] Checking for updates...
	I1105 10:12:21.632868   20650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:21.654057   20650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:12:21.674788   20650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:12:21.696036   20650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:12:21.717022   20650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:12:21.738560   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.739289   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.739362   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.752070   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59007
	I1105 10:12:21.752427   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.752834   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.752843   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.753115   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.753236   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.753425   20650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:12:21.753684   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.753710   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.764480   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59009
	I1105 10:12:21.764817   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.765142   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.765158   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.765399   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.765513   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.796815   20650 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:12:21.838784   20650 start.go:297] selected driver: hyperkit
	I1105 10:12:21.838816   20650 start.go:901] validating driver "hyperkit" against &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.839082   20650 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:12:21.839288   20650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.839546   20650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:12:21.851704   20650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:12:21.858679   20650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.858708   20650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:12:21.864360   20650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:12:21.864394   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:21.864431   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:21.864510   20650 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.864624   20650 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.886086   20650 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:12:21.927848   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:21.927921   20650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:12:21.927965   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:21.928204   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:21.928223   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:21.928393   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:21.929303   20650 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:21.929483   20650 start.go:364] duration metric: took 156.606µs to acquireMachinesLock for "ha-213000"
	I1105 10:12:21.929515   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:21.929530   20650 fix.go:54] fixHost starting: 
	I1105 10:12:21.929991   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.930022   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.941843   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59011
	I1105 10:12:21.942146   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.942523   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.942539   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.942770   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.942869   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.942962   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.943046   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.943124   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.944238   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.944273   20650 fix.go:112] recreateIfNeeded on ha-213000: state=Stopped err=<nil>
	I1105 10:12:21.944288   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	W1105 10:12:21.944375   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:21.965704   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000" ...
	I1105 10:12:21.986830   20650 main.go:141] libmachine: (ha-213000) Calling .Start
	I1105 10:12:21.986975   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.987000   20650 main.go:141] libmachine: (ha-213000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:12:21.988429   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.988437   20650 main.go:141] libmachine: (ha-213000) DBG | pid 20508 is in state "Stopped"
	I1105 10:12:21.988449   20650 main.go:141] libmachine: (ha-213000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid...
	I1105 10:12:21.988605   20650 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:12:22.098530   20650 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:12:22.098573   20650 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:22.098772   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098813   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098872   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:22.098916   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:22.098942   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:22.100556   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Pid is 20664
	I1105 10:12:22.101143   20650 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:12:22.101159   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:22.101260   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:12:22.103059   20650 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:12:22.103211   20650 main.go:141] libmachine: (ha-213000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:22.103230   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:22.103244   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:22.103282   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:22.103300   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6d37}
	I1105 10:12:22.103320   20650 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:12:22.103326   20650 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:12:22.103333   20650 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:12:22.104301   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:22.104508   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:22.104940   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:22.104951   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:22.105084   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:22.105206   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:22.105334   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105499   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105662   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:22.106057   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:22.106277   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:22.106287   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:22.111841   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:22.167275   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:22.168436   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.168488   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.168505   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.168538   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.563375   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:22.563390   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:22.678087   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.678107   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.678118   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.678127   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.678997   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:22.679010   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:28.419344   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:28.419383   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:28.419395   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:28.443700   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:12:33.165174   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:12:33.165187   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165353   20650 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:12:33.165363   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165462   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.165555   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.165665   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165766   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.166032   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.166168   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.166176   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:12:33.233946   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:12:33.233965   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.234107   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.234213   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234303   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234419   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.234574   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.234722   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.234733   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:12:33.296276   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:12:33.296296   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:12:33.296314   20650 buildroot.go:174] setting up certificates
	I1105 10:12:33.296331   20650 provision.go:84] configureAuth start
	I1105 10:12:33.296340   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.296489   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:33.296589   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.296674   20650 provision.go:143] copyHostCerts
	I1105 10:12:33.296705   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296779   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:12:33.296787   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296976   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:12:33.297202   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297251   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:12:33.297256   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297953   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:12:33.298150   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298199   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:12:33.298205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298290   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:12:33.298468   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:12:33.417814   20650 provision.go:177] copyRemoteCerts
	I1105 10:12:33.417886   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:12:33.417904   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.418044   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.418142   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.418231   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.418333   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:33.452233   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:12:33.452305   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 10:12:33.471837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:12:33.471904   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:12:33.491510   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:12:33.491572   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:12:33.511221   20650 provision.go:87] duration metric: took 214.877215ms to configureAuth
	I1105 10:12:33.511235   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:12:33.511399   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:33.511412   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:33.511554   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.511653   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.511767   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511859   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511944   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.512074   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.512201   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.512209   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:12:33.567448   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:12:33.567460   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:12:33.567540   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:12:33.567552   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.567685   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.567782   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567875   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567957   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.568105   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.568243   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.568289   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:12:33.633746   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:12:33.633770   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.633912   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.634017   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634113   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634221   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.634373   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.634523   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.634538   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:12:35.361033   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:12:35.361047   20650 machine.go:96] duration metric: took 13.256219662s to provisionDockerMachine
	I1105 10:12:35.361058   20650 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:12:35.361081   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:12:35.361095   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.361306   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:12:35.361323   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.361415   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.361506   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.361580   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.361669   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.396970   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:12:35.400946   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:12:35.400961   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:12:35.401074   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:12:35.401496   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:12:35.401503   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:12:35.401766   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:12:35.411536   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:35.443784   20650 start.go:296] duration metric: took 82.704716ms for postStartSetup
	I1105 10:12:35.443806   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.444003   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:12:35.444016   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.444100   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.444180   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.444258   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.444349   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.477407   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:12:35.477482   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:12:35.509435   20650 fix.go:56] duration metric: took 13.580030444s for fixHost
	I1105 10:12:35.509456   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.509592   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.509688   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509776   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.510031   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:35.510178   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:35.510185   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:12:35.565839   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830355.864292832
	
	I1105 10:12:35.565852   20650 fix.go:216] guest clock: 1730830355.864292832
	I1105 10:12:35.565857   20650 fix.go:229] Guest: 2024-11-05 10:12:35.864292832 -0800 PST Remote: 2024-11-05 10:12:35.509447 -0800 PST m=+14.061466364 (delta=354.845832ms)
	I1105 10:12:35.565875   20650 fix.go:200] guest clock delta is within tolerance: 354.845832ms
	I1105 10:12:35.565882   20650 start.go:83] releasing machines lock for "ha-213000", held for 13.636511126s
	I1105 10:12:35.565900   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566049   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:35.566151   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566446   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566554   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566709   20650 ssh_runner.go:195] Run: cat /version.json
	I1105 10:12:35.566721   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.566806   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.566888   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.566979   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567064   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.567357   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:12:35.567386   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.567477   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.567559   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.567637   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567715   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.649786   20650 ssh_runner.go:195] Run: systemctl --version
	I1105 10:12:35.655155   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:12:35.659391   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:12:35.659449   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:12:35.672884   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:12:35.672896   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.672997   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:35.691142   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:12:35.700361   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:12:35.709604   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:12:35.709664   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:12:35.718677   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.727574   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:12:35.736665   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.745463   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:12:35.754435   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:12:35.763449   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:12:35.772263   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:12:35.781386   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:12:35.789651   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:12:35.789704   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:12:35.798805   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:12:35.807011   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:35.912193   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:12:35.927985   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.928078   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:12:35.940041   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.954880   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:12:35.969797   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.981073   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:35.992124   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:12:36.016061   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:36.027432   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:36.042843   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:12:36.045910   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:12:36.054070   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:12:36.067653   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:12:36.164803   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:12:36.262358   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:12:36.262434   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:12:36.276549   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:36.372055   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:12:38.718640   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346585524s)
	I1105 10:12:38.718725   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:12:38.729009   20650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:12:38.741745   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:38.752392   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:12:38.846699   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:12:38.961329   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.072900   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:12:39.086802   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:39.097743   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.205555   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:12:39.272726   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:12:39.273861   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:12:39.278279   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:12:39.278336   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:12:39.281386   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:12:39.307263   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:12:39.307378   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.325423   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.384603   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:12:39.384677   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:39.385383   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:12:39.389204   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.398876   20650 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:12:39.398970   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:39.399044   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.411346   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.411370   20650 docker.go:619] Images already preloaded, skipping extraction
	I1105 10:12:39.411458   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.424491   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.424511   20650 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:12:39.424518   20650 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:12:39.424600   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:12:39.424690   20650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:12:39.458782   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:39.458796   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:39.458807   20650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:12:39.458824   20650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:12:39.458910   20650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:12:39.458922   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:12:39.459000   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:12:39.472063   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:12:39.472130   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:12:39.472197   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:12:39.480694   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:12:39.480761   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:12:39.488010   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:12:39.501448   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:12:39.514699   20650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:12:39.528604   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:12:39.542711   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:12:39.545676   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.555042   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.651842   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:12:39.666232   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:12:39.666245   20650 certs.go:194] generating shared ca certs ...
	I1105 10:12:39.666254   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.666455   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:12:39.666548   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:12:39.666558   20650 certs.go:256] generating profile certs ...
	I1105 10:12:39.666641   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:12:39.666660   20650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b
	I1105 10:12:39.666677   20650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:12:39.768951   20650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b ...
	I1105 10:12:39.768965   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b: {Name:mk94691c5901a2a72a9bc83f127c5282216d457c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.769986   20650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b ...
	I1105 10:12:39.770003   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b: {Name:mk80fa552a8414775a1a2e3534b5be60adeae6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.770739   20650 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:12:39.770972   20650 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:12:39.771252   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:12:39.771262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:12:39.771288   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:12:39.771314   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:12:39.771335   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:12:39.771353   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:12:39.771376   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:12:39.771395   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:12:39.771413   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:12:39.771524   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:12:39.771579   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:12:39.771588   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:12:39.771622   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:12:39.771657   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:12:39.771686   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:12:39.771750   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:39.771787   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:12:39.771817   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:39.771836   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:12:39.772313   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:12:39.799103   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:12:39.823713   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:12:39.848122   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:12:39.876362   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:12:39.898968   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:12:39.924496   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:12:39.975578   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:12:40.017567   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:12:40.062386   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:12:40.134510   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:12:40.170763   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:12:40.196135   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:12:40.201525   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:12:40.214259   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222331   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222400   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.235959   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:12:40.247519   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:12:40.256007   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259529   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259576   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.263770   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:12:40.272126   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:12:40.280328   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283753   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283804   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.288095   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:12:40.296378   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:12:40.300009   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:12:40.304421   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:12:40.309440   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:12:40.314156   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:12:40.318720   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:12:40.323054   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:12:40.327653   20650 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:40.327789   20650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:12:40.338896   20650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:12:40.346426   20650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 10:12:40.346451   20650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 10:12:40.346505   20650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 10:12:40.354659   20650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:12:40.354973   20650 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-213000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355052   20650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19910-17277/kubeconfig needs updating (will repair): [kubeconfig missing "ha-213000" cluster setting kubeconfig missing "ha-213000" context setting]
	I1105 10:12:40.355252   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.355659   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355866   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:12:40.356225   20650 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:12:40.356390   20650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 10:12:40.363779   20650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1105 10:12:40.363792   20650 kubeadm.go:597] duration metric: took 17.337248ms to restartPrimaryControlPlane
	I1105 10:12:40.363798   20650 kubeadm.go:394] duration metric: took 36.151791ms to StartCluster
	I1105 10:12:40.363807   20650 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.363904   20650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.364287   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.364493   20650 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:12:40.364506   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:12:40.364518   20650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:12:40.364641   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.406496   20650 out.go:177] * Enabled addons: 
	I1105 10:12:40.427423   20650 addons.go:510] duration metric: took 62.890869ms for enable addons: enabled=[]
	I1105 10:12:40.427463   20650 start.go:246] waiting for cluster config update ...
	I1105 10:12:40.427476   20650 start.go:255] writing updated cluster config ...
	I1105 10:12:40.449627   20650 out.go:201] 
	I1105 10:12:40.470603   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.470682   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.492690   20650 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:12:40.534643   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:40.534678   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:40.534889   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:40.534908   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:40.535035   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.535960   20650 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:40.536081   20650 start.go:364] duration metric: took 95.311µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:12:40.536107   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:40.536116   20650 fix.go:54] fixHost starting: m02
	I1105 10:12:40.536544   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:40.536591   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:40.548252   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59033
	I1105 10:12:40.548561   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:40.548918   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:40.548932   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:40.549159   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:40.549276   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.549386   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:40.549477   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.549545   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:40.550641   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.550670   20650 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:12:40.550679   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:12:40.550782   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:40.571623   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:12:40.592623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:12:40.592918   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.592966   20650 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:12:40.594491   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.594501   20650 main.go:141] libmachine: (ha-213000-m02) DBG | pid 20524 is in state "Stopped"
	I1105 10:12:40.594516   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:12:40.594967   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:12:40.619713   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:12:40.619737   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:40.619893   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619922   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619952   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/
machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:40.619999   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:40.620018   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:40.621465   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Pid is 20673
	I1105 10:12:40.621946   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:12:40.621963   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.622060   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:12:40.623801   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:12:40.623940   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:40.623961   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:12:40.623986   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:40.624000   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:40.624015   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:40.624016   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:12:40.624023   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:12:40.624043   20650 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:12:40.624734   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:12:40.624956   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.625445   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:40.625455   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.625562   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:12:40.625653   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:12:40.625748   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.625874   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.626045   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:12:40.626222   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:40.626362   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:12:40.626369   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:40.631955   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:40.641267   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:40.642527   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:40.642544   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:40.642551   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:40.642561   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.034838   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:41.034853   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:41.149888   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:41.149903   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:41.149911   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:41.149917   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.150684   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:41.150696   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:46.914486   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:46.914552   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:46.914564   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:46.937828   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:15.697814   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:15.697829   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.697958   20650 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:13:15.697969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.698068   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.698166   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.698262   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698349   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698429   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.698590   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.698739   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.698748   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:13:15.770158   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:13:15.770174   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.770319   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.770428   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770526   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.770785   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.770922   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.770933   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:15.838124   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:15.838139   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:15.838159   20650 buildroot.go:174] setting up certificates
	I1105 10:13:15.838166   20650 provision.go:84] configureAuth start
	I1105 10:13:15.838173   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.838309   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:15.838391   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.838477   20650 provision.go:143] copyHostCerts
	I1105 10:13:15.838504   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838551   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:15.838557   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838677   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:15.838892   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.838922   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:15.838926   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.839007   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:15.839169   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839200   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:15.839205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839275   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:15.839440   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:13:15.878682   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:15.878747   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:15.878761   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.878912   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.879015   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.879122   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.879221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:15.916727   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:15.916795   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:15.936280   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:15.936341   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:13:15.956339   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:15.956417   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:15.976131   20650 provision.go:87] duration metric: took 137.957663ms to configureAuth
	I1105 10:13:15.976145   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:15.976324   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:15.976339   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:15.976475   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.976573   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.976661   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976740   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976813   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.976940   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.977065   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.977072   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:16.038725   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:16.038739   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:16.038839   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:16.038851   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.038998   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.039098   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039192   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039283   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.039436   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.039572   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.039618   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:16.112446   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:16.112468   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.112623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.112715   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112811   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112892   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.113049   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.113223   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.113236   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:17.783702   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:17.783717   20650 machine.go:96] duration metric: took 37.158599705s to provisionDockerMachine
	I1105 10:13:17.783726   20650 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:13:17.783733   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:17.783744   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.783939   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:17.783953   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.784616   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.785152   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.785404   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.785500   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.822226   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:17.825293   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:17.825304   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:17.825392   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:17.825532   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:17.825538   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:17.825699   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:17.832977   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:17.852599   20650 start.go:296] duration metric: took 68.865935ms for postStartSetup
	I1105 10:13:17.852645   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.852828   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:17.852840   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.852946   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.853034   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.853111   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.853195   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.891315   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:17.891389   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:17.944504   20650 fix.go:56] duration metric: took 37.408724528s for fixHost
	I1105 10:13:17.944528   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.944681   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.944779   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944880   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944973   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.945125   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:17.945257   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:17.945264   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:18.009463   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830397.963598694
	
	I1105 10:13:18.009476   20650 fix.go:216] guest clock: 1730830397.963598694
	I1105 10:13:18.009482   20650 fix.go:229] Guest: 2024-11-05 10:13:17.963598694 -0800 PST Remote: 2024-11-05 10:13:17.944519 -0800 PST m=+56.496923048 (delta=19.079694ms)
	I1105 10:13:18.009492   20650 fix.go:200] guest clock delta is within tolerance: 19.079694ms
	I1105 10:13:18.009495   20650 start.go:83] releasing machines lock for "ha-213000-m02", held for 37.47374268s
	I1105 10:13:18.009512   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.009649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:18.032281   20650 out.go:177] * Found network options:
	I1105 10:13:18.052088   20650 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:13:18.073014   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.073053   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.073969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074186   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074319   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:13:18.074355   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:13:18.074369   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.074467   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:18.074483   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:18.074488   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074646   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074801   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.074850   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074993   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:18.075008   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.075127   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:13:18.108947   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:18.109027   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:18.155414   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:18.155436   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.155551   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.172114   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:18.180388   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:18.188528   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.188587   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:18.196712   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.204897   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:18.213206   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.221579   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:18.230149   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:18.238366   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:18.246617   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:18.255037   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:18.262631   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:18.262690   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:18.270933   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:18.278375   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.375712   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:18.394397   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.394485   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:18.410636   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.423391   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:18.441876   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.452612   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.462897   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:18.485662   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.495897   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.511009   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:18.513991   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:18.521476   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:18.534868   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:18.632191   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:18.734981   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.735009   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:18.749050   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.853897   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:13:21.134871   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28097554s)
	I1105 10:13:21.134948   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:13:21.146360   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.157264   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:13:21.267741   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:13:21.382285   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.483458   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:13:21.496077   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.506512   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.618640   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:13:21.685448   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:13:21.685559   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:13:21.689888   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:13:21.689958   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:13:21.693059   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:13:21.721401   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:13:21.721489   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.737796   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.775162   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:13:21.818311   20650 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:13:21.839158   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:21.839596   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:13:21.844257   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:21.854347   20650 mustload.go:65] Loading cluster: ha-213000
	I1105 10:13:21.854526   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:21.854763   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.854810   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.866117   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59055
	I1105 10:13:21.866449   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.866785   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.866795   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.867005   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.867094   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:13:21.867180   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:21.867248   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:13:21.868436   20650 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:13:21.868696   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.868721   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.879648   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59057
	I1105 10:13:21.879951   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.880304   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.880326   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.880564   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.880680   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:13:21.880800   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:13:21.880806   20650 certs.go:194] generating shared ca certs ...
	I1105 10:13:21.880817   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:13:21.880976   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:13:21.881033   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:13:21.881041   20650 certs.go:256] generating profile certs ...
	I1105 10:13:21.881133   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:13:21.881677   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:13:21.881747   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:13:21.881756   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:13:21.881777   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:13:21.881800   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:13:21.881819   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:13:21.881837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:13:21.881855   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:13:21.881874   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:13:21.881891   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:13:21.881971   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:13:21.882008   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:13:21.882016   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:13:21.882051   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:13:21.882090   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:13:21.882131   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:13:21.882199   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:21.882240   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:21.882262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:13:21.882285   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:13:21.882314   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:13:21.882395   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:13:21.882480   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:13:21.882563   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:13:21.882639   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:13:21.908416   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:13:21.911559   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:13:21.921605   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:13:21.924753   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:13:21.933495   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:13:21.936611   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:13:21.945312   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:13:21.948273   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:13:21.957659   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:13:21.960739   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:13:21.969191   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:13:21.972356   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:13:21.981306   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:13:22.001469   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:13:22.021181   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:13:22.040587   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:13:22.060078   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:13:22.079285   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:13:22.098538   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:13:22.118296   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:13:22.137769   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:13:22.156929   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:13:22.176353   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:13:22.195510   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:13:22.209194   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:13:22.222827   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:13:22.236546   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:13:22.250070   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:13:22.263444   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:13:22.276970   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:13:22.290700   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:13:22.294935   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:13:22.304164   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307578   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307635   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.311940   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:13:22.320904   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:13:22.329872   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333271   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333318   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.337523   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:13:22.346681   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:13:22.355874   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359764   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359823   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.364168   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:13:22.373288   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:13:22.376713   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:13:22.381681   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:13:22.386495   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:13:22.390985   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:13:22.395318   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:13:22.399578   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:13:22.403998   20650 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:13:22.404052   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:13:22.404067   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:13:22.404115   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:13:22.417096   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:13:22.417139   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:13:22.417203   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:13:22.426058   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:13:22.426117   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:13:22.434774   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:13:22.448444   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:13:22.461910   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:13:22.475772   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:13:22.478602   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:22.487944   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.594180   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.608389   20650 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:13:22.608597   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:22.629533   20650 out.go:177] * Verifying Kubernetes components...
	I1105 10:13:22.671507   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.795219   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.807186   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:13:22.807391   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:13:22.807429   20650 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:13:22.807616   20650 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:22.807698   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:22.807704   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:22.807711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:22.807714   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.750948   20650 round_trippers.go:574] Response Status: 200 OK in 8943 milliseconds
	I1105 10:13:31.752572   20650 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:13:31.752585   20650 node_ready.go:38] duration metric: took 8.945035646s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:31.752614   20650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:31.752661   20650 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:13:31.752671   20650 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:13:31.752720   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:31.752727   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.752733   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.752738   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.802951   20650 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1105 10:13:31.809829   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.809889   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:13:31.809894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.809900   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.809904   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.814415   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:31.815355   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.815363   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.815369   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.815373   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.822380   20650 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:13:31.822662   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.822672   20650 pod_ready.go:82] duration metric: took 12.826683ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822679   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822728   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:13:31.822733   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.822739   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.822744   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.826328   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.826822   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.826831   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.826837   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.826841   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.829860   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.830181   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.830191   20650 pod_ready.go:82] duration metric: took 7.507226ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830198   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830235   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:13:31.830240   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.830245   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.830252   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.832219   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:31.832697   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.832706   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.832711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.832715   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.835276   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.835692   20650 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.835701   20650 pod_ready.go:82] duration metric: took 5.498306ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835709   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835747   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:13:31.835752   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.835758   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.835762   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.841537   20650 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:13:31.841973   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:31.841981   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.841986   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.841990   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844531   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.844869   20650 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.844879   20650 pod_ready.go:82] duration metric: took 9.164525ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844885   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844921   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:13:31.844926   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.844931   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844936   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.848600   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.952821   20650 request.go:632] Waited for 103.696334ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952860   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952865   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.952873   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.952877   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.955043   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:31.955226   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955236   20650 pod_ready.go:82] duration metric: took 110.346207ms for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:31.955242   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955257   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.153855   20650 request.go:632] Waited for 198.56381ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153901   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153906   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.153912   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.153915   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.156326   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.354721   20650 request.go:632] Waited for 197.883079ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354800   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354808   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.354816   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.354821   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.357314   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.357758   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.357771   20650 pod_ready.go:82] duration metric: took 402.50745ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.357779   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.554904   20650 request.go:632] Waited for 197.060501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555009   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555040   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.555059   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.555071   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.562819   20650 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 10:13:32.752788   20650 request.go:632] Waited for 189.599558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752820   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752825   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.752864   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.752870   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.755075   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.755378   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.755387   20650 pod_ready.go:82] duration metric: took 397.605979ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.755394   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.952787   20650 request.go:632] Waited for 197.357502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952836   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952842   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.952853   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.955636   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.153249   20650 request.go:632] Waited for 196.999871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153317   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153323   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.153329   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.153334   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.155712   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:33.155782   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155797   20650 pod_ready.go:82] duration metric: took 400.400564ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:33.155804   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155810   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.353944   20650 request.go:632] Waited for 198.075152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354021   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354033   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.354041   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.354047   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.356715   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.553130   20650 request.go:632] Waited for 196.01942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553198   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553204   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.553237   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.553242   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.555527   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.555890   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.555899   20650 pod_ready.go:82] duration metric: took 400.086552ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.555906   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.752845   20650 request.go:632] Waited for 196.894857ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752909   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752915   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.752921   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.752925   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.754805   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.953311   20650 request.go:632] Waited for 197.807461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953353   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953381   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.953389   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.953392   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.955376   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.955836   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.955846   20650 pod_ready.go:82] duration metric: took 399.938695ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.955855   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.153021   20650 request.go:632] Waited for 197.093812ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153060   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153065   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.153072   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.153075   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.155546   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:34.353423   20650 request.go:632] Waited for 197.340662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353457   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353463   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.353469   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.353472   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.355383   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.355495   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355514   20650 pod_ready.go:82] duration metric: took 399.657027ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.355524   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355532   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.553620   20650 request.go:632] Waited for 198.034445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553677   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553683   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.553689   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.553694   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.555564   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:34.753369   20650 request.go:632] Waited for 197.394131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753424   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753431   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.753436   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.753440   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.755363   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.755426   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755436   20650 pod_ready.go:82] duration metric: took 399.890345ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.755442   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755446   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.953531   20650 request.go:632] Waited for 198.038372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953615   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953624   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.953631   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.953636   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.955951   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.153813   20650 request.go:632] Waited for 196.981939ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153879   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.153903   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.153910   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.156466   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.157099   20650 pod_ready.go:93] pod "kube-proxy-m45pk" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.157109   20650 pod_ready.go:82] duration metric: took 401.65588ms for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.157117   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.354248   20650 request.go:632] Waited for 197.082179ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354294   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354302   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.354340   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.354347   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.357098   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.552778   20650 request.go:632] Waited for 195.237923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552882   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552910   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.552918   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.552923   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.555242   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.555725   20650 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.555734   20650 pod_ready.go:82] duration metric: took 398.615884ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.555748   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.752802   20650 request.go:632] Waited for 196.982082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752849   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752855   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.752861   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.752865   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.755216   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.953665   20650 request.go:632] Waited for 197.923503ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953733   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953742   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.953751   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.953758   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.955875   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.956268   20650 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.956277   20650 pod_ready.go:82] duration metric: took 400.526917ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.956283   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.153409   20650 request.go:632] Waited for 197.086533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153486   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153496   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.153504   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.153513   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.156474   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.354367   20650 request.go:632] Waited for 197.602225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354401   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354406   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.354421   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.354441   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.356601   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.356994   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.357004   20650 pod_ready.go:82] duration metric: took 400.718541ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.357011   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.554145   20650 request.go:632] Waited for 197.038016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554243   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554252   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.554264   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.554270   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.556774   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.753404   20650 request.go:632] Waited for 196.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753437   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753442   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.753448   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.753452   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.756764   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:36.757112   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.757122   20650 pod_ready.go:82] duration metric: took 400.109512ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.757130   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.953514   20650 request.go:632] Waited for 196.347448ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953546   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953558   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.953565   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.953575   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.955940   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.154619   20650 request.go:632] Waited for 198.194145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154663   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154669   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.154676   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.154695   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.157438   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:37.157524   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157535   20650 pod_ready.go:82] duration metric: took 400.40261ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:37.157542   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157547   20650 pod_ready.go:39] duration metric: took 5.404967892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:37.157569   20650 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:13:37.157646   20650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:13:37.171805   20650 api_server.go:72] duration metric: took 14.563521484s to wait for apiserver process to appear ...
	I1105 10:13:37.171821   20650 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:13:37.171836   20650 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:13:37.176463   20650 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:13:37.176507   20650 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:13:37.176512   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.176518   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.176523   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.177377   20650 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:13:37.177442   20650 api_server.go:141] control plane version: v1.31.2
	I1105 10:13:37.177460   20650 api_server.go:131] duration metric: took 5.62791ms to wait for apiserver health ...
	I1105 10:13:37.177467   20650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:13:37.352914   20650 request.go:632] Waited for 175.404088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352969   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352975   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.352982   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.352986   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.357439   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.362936   20650 system_pods.go:59] 26 kube-system pods found
	I1105 10:13:37.362960   20650 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.362964   20650 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.362968   20650 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.362973   20650 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.362980   20650 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.362984   20650 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.362987   20650 system_pods.go:61] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.362993   20650 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.362999   20650 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.363003   20650 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.363007   20650 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.363013   20650 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.363016   20650 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.363021   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.363024   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.363027   20650 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.363030   20650 system_pods.go:61] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.363036   20650 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 10:13:37.363040   20650 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.363042   20650 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.363046   20650 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.363051   20650 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.363055   20650 system_pods.go:61] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.363059   20650 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.363063   20650 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.363065   20650 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.363070   20650 system_pods.go:74] duration metric: took 185.599377ms to wait for pod list to return data ...
	I1105 10:13:37.363076   20650 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:13:37.554093   20650 request.go:632] Waited for 190.967335ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554130   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554138   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.554152   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.554156   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.557460   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:37.557594   20650 default_sa.go:45] found service account: "default"
	I1105 10:13:37.557604   20650 default_sa.go:55] duration metric: took 194.526347ms for default service account to be created ...
	I1105 10:13:37.557612   20650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:13:37.752842   20650 request.go:632] Waited for 195.185977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752875   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752881   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.752902   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.752907   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.757021   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.762493   20650 system_pods.go:86] 26 kube-system pods found
	I1105 10:13:37.762505   20650 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.762509   20650 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.762512   20650 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.762517   20650 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.762521   20650 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.762525   20650 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.762528   20650 system_pods.go:89] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.762532   20650 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.762535   20650 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.762539   20650 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.762543   20650 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.762548   20650 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.762551   20650 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.762557   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.762561   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.762566   20650 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.762569   20650 system_pods.go:89] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.762572   20650 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:13:37.762575   20650 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.762578   20650 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.762583   20650 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.762590   20650 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.762594   20650 system_pods.go:89] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.762596   20650 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.762600   20650 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.762602   20650 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.762607   20650 system_pods.go:126] duration metric: took 204.991818ms to wait for k8s-apps to be running ...
	I1105 10:13:37.762614   20650 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:13:37.762682   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:13:37.777110   20650 system_svc.go:56] duration metric: took 14.491738ms WaitForService to wait for kubelet
	I1105 10:13:37.777127   20650 kubeadm.go:582] duration metric: took 15.16885159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:13:37.777138   20650 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:13:37.952770   20650 request.go:632] Waited for 175.557407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952816   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952827   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.952839   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.955592   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.956364   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956379   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956390   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956393   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956397   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956399   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956403   20650 node_conditions.go:105] duration metric: took 179.263041ms to run NodePressure ...
	I1105 10:13:37.956411   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:13:37.956426   20650 start.go:255] writing updated cluster config ...
	I1105 10:13:37.978800   20650 out.go:201] 
	I1105 10:13:38.000237   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:38.000353   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.022912   20650 out.go:177] * Starting "ha-213000-m04" worker node in "ha-213000" cluster
	I1105 10:13:38.065816   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:13:38.065838   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:13:38.065942   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:13:38.065952   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:13:38.066024   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.066548   20650 start.go:360] acquireMachinesLock for ha-213000-m04: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:13:38.066601   20650 start.go:364] duration metric: took 39.836µs to acquireMachinesLock for "ha-213000-m04"
	I1105 10:13:38.066614   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:13:38.066619   20650 fix.go:54] fixHost starting: m04
	I1105 10:13:38.066839   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:38.066859   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:38.078183   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59062
	I1105 10:13:38.078511   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:38.078858   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:38.078877   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:38.079111   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:38.079203   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.079308   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:13:38.079392   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.079457   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:13:38.080557   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.080601   20650 fix.go:112] recreateIfNeeded on ha-213000-m04: state=Stopped err=<nil>
	I1105 10:13:38.080610   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	W1105 10:13:38.080695   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:13:38.101909   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m04" ...
	I1105 10:13:38.150121   20650 main.go:141] libmachine: (ha-213000-m04) Calling .Start
	I1105 10:13:38.150270   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.150297   20650 main.go:141] libmachine: (ha-213000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid
	I1105 10:13:38.151495   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.151504   20650 main.go:141] libmachine: (ha-213000-m04) DBG | pid 20571 is in state "Stopped"
	I1105 10:13:38.151536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid...
	I1105 10:13:38.151981   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Using UUID 70721578-92b7-4edc-935c-43ebcacd790c
	I1105 10:13:38.175524   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Generated MAC 1a:a3:f2:a5:2e:39
	I1105 10:13:38.175551   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:13:38.175756   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175805   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175883   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70721578-92b7-4edc-935c-43ebcacd790c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:13:38.175929   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70721578-92b7-4edc-935c-43ebcacd790c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:13:38.175943   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:13:38.177358   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Pid is 20690
	I1105 10:13:38.177760   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Attempt 0
	I1105 10:13:38.177775   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.177790   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:13:38.179817   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Searching for 1a:a3:f2:a5:2e:39 in /var/db/dhcpd_leases ...
	I1105 10:13:38.179881   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:13:38.179891   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:13:38.179930   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:13:38.179944   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:13:38.179961   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:13:38.179966   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found match: 1a:a3:f2:a5:2e:39
	I1105 10:13:38.179974   20650 main.go:141] libmachine: (ha-213000-m04) DBG | IP: 192.169.0.8
	I1105 10:13:38.180001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetConfigRaw
	I1105 10:13:38.180736   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:38.180968   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.181459   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:13:38.181471   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.181605   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:38.181707   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:38.181828   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.181929   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.182026   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:38.182165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:38.182315   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:38.182325   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:13:38.188897   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:13:38.198428   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:13:38.199856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.199886   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.199916   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.199953   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.594841   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:13:38.594856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:13:38.709716   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.709736   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.709743   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.709759   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.710592   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:13:38.710604   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:13:44.475519   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:13:44.475536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:13:44.475546   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:13:44.498793   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:49.237329   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:49.237349   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237488   20650 buildroot.go:166] provisioning hostname "ha-213000-m04"
	I1105 10:13:49.237500   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237590   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.237684   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.237765   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237842   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237935   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.238078   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.238220   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.238229   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m04 && echo "ha-213000-m04" | sudo tee /etc/hostname
	I1105 10:13:49.297417   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m04
	
	I1105 10:13:49.297437   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.297576   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.297673   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297757   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297853   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.297997   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.298162   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.298173   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:49.354308   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:49.354323   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:49.354341   20650 buildroot.go:174] setting up certificates
	I1105 10:13:49.354349   20650 provision.go:84] configureAuth start
	I1105 10:13:49.354357   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.354507   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:49.354606   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.354711   20650 provision.go:143] copyHostCerts
	I1105 10:13:49.354741   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354793   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:49.354799   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354909   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:49.355124   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355155   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:49.355159   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355228   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:49.355419   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355454   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:49.355461   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355528   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:49.355690   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m04 san=[127.0.0.1 192.169.0.8 ha-213000-m04 localhost minikube]
	I1105 10:13:49.396705   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:49.396767   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:49.396780   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.396910   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.397015   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.397117   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.397221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:49.427813   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:49.427885   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:49.447457   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:49.447518   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:49.467286   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:49.467359   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:13:49.487192   20650 provision.go:87] duration metric: took 132.83626ms to configureAuth
	I1105 10:13:49.487209   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:49.487380   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:49.487394   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:49.487531   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.487631   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.487715   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487801   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487890   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.488033   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.488154   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.488162   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:49.537465   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:49.537478   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:49.537561   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:49.537571   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.537704   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.537799   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537884   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537998   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.538165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.538298   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.538345   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:49.598479   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:49.598502   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.598649   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.598747   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598833   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598947   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.599089   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.599234   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.599246   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:51.207763   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:51.207782   20650 machine.go:96] duration metric: took 13.026432223s to provisionDockerMachine
	I1105 10:13:51.207792   20650 start.go:293] postStartSetup for "ha-213000-m04" (driver="hyperkit")
	I1105 10:13:51.207801   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:51.207816   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.208031   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:51.208047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.208140   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.208231   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.208318   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.208438   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.241123   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:51.244240   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:51.244251   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:51.244336   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:51.244477   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:51.244484   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:51.244646   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:51.252753   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:51.271782   20650 start.go:296] duration metric: took 63.980744ms for postStartSetup
	I1105 10:13:51.271803   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.271989   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:51.272001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.272093   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.272178   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.272277   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.272371   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.304392   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:51.304469   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:51.358605   20650 fix.go:56] duration metric: took 13.292102469s for fixHost
	I1105 10:13:51.358630   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.358783   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.358880   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.358963   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.359053   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.359195   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:51.359329   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:51.359336   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:51.407868   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830431.709090009
	
	I1105 10:13:51.407885   20650 fix.go:216] guest clock: 1730830431.709090009
	I1105 10:13:51.407890   20650 fix.go:229] Guest: 2024-11-05 10:13:51.709090009 -0800 PST Remote: 2024-11-05 10:13:51.35862 -0800 PST m=+89.911326584 (delta=350.470009ms)
	I1105 10:13:51.407901   20650 fix.go:200] guest clock delta is within tolerance: 350.470009ms
	I1105 10:13:51.407906   20650 start.go:83] releasing machines lock for "ha-213000-m04", held for 13.34141889s
	I1105 10:13:51.407923   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.408055   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:51.430524   20650 out.go:177] * Found network options:
	I1105 10:13:51.451633   20650 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:13:51.472140   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.472164   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.472179   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472739   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472888   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.473015   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1105 10:13:51.473025   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.473039   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.473047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473124   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:51.473137   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473175   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473286   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473299   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473387   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473400   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473487   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.473517   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473599   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	W1105 10:13:51.501432   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:51.501515   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:51.553972   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:51.553993   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.554083   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.569365   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:51.577607   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:51.586014   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:51.586084   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:51.594293   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.602646   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:51.610969   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.619400   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:51.627741   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:51.635982   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:51.645401   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:51.653565   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:51.660899   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:51.660963   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:51.669419   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:51.677143   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:51.772664   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:51.792178   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.792270   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:51.808083   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.820868   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:51.842221   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.854583   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.865539   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:51.892869   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.904042   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.922494   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:51.928520   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:51.945780   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:51.962437   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:52.060460   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:52.163232   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:52.163260   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:52.178328   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:52.296397   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:14:53.349067   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016016812s)
	I1105 10:14:53.349159   20650 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:14:53.385876   20650 out.go:201] 
	W1105 10:14:53.422606   20650 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:14:53.422674   20650 out.go:270] * 
	* 
	W1105 10:14:53.423462   20650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:14:53.533703   20650 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-213000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (3.498466466s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000 -v=7                                                                                                       | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-213000 -v=7                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:08 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST |                     |
	| node    | ha-213000 node delete m03 -v=7                                                                                               | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-213000 stop -v=7                                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:12 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true                                                                                                     | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:12 PST |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:12:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:12:21.490688   20650 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.490996   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491002   20650 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.491006   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491183   20650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.492670   20650 out.go:352] Setting JSON to false
	I1105 10:12:21.523908   20650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7910,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:12:21.523997   20650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:12:21.546247   20650 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:12:21.588131   20650 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:12:21.588174   20650 notify.go:220] Checking for updates...
	I1105 10:12:21.632868   20650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:21.654057   20650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:12:21.674788   20650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:12:21.696036   20650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:12:21.717022   20650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:12:21.738560   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.739289   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.739362   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.752070   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59007
	I1105 10:12:21.752427   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.752834   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.752843   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.753115   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.753236   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.753425   20650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:12:21.753684   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.753710   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.764480   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59009
	I1105 10:12:21.764817   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.765142   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.765158   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.765399   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.765513   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.796815   20650 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:12:21.838784   20650 start.go:297] selected driver: hyperkit
	I1105 10:12:21.838816   20650 start.go:901] validating driver "hyperkit" against &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.839082   20650 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:12:21.839288   20650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.839546   20650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:12:21.851704   20650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:12:21.858679   20650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.858708   20650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:12:21.864360   20650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:12:21.864394   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:21.864431   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:21.864510   20650 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.864624   20650 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.886086   20650 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:12:21.927848   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:21.927921   20650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:12:21.927965   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:21.928204   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:21.928223   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:21.928393   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:21.929303   20650 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:21.929483   20650 start.go:364] duration metric: took 156.606µs to acquireMachinesLock for "ha-213000"
	I1105 10:12:21.929515   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:21.929530   20650 fix.go:54] fixHost starting: 
	I1105 10:12:21.929991   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.930022   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.941843   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59011
	I1105 10:12:21.942146   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.942523   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.942539   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.942770   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.942869   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.942962   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.943046   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.943124   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.944238   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.944273   20650 fix.go:112] recreateIfNeeded on ha-213000: state=Stopped err=<nil>
	I1105 10:12:21.944288   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	W1105 10:12:21.944375   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:21.965704   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000" ...
	I1105 10:12:21.986830   20650 main.go:141] libmachine: (ha-213000) Calling .Start
	I1105 10:12:21.986975   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.987000   20650 main.go:141] libmachine: (ha-213000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:12:21.988429   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.988437   20650 main.go:141] libmachine: (ha-213000) DBG | pid 20508 is in state "Stopped"
	I1105 10:12:21.988449   20650 main.go:141] libmachine: (ha-213000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid...
	I1105 10:12:21.988605   20650 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:12:22.098530   20650 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:12:22.098573   20650 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:22.098772   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098813   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098872   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:22.098916   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:22.098942   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:22.100556   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Pid is 20664
	I1105 10:12:22.101143   20650 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:12:22.101159   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:22.101260   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:12:22.103059   20650 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:12:22.103211   20650 main.go:141] libmachine: (ha-213000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:22.103230   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:22.103244   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:22.103282   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:22.103300   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6d37}
	I1105 10:12:22.103320   20650 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:12:22.103326   20650 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:12:22.103333   20650 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:12:22.104301   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:22.104508   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:22.104940   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:22.104951   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:22.105084   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:22.105206   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:22.105334   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105499   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105662   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:22.106057   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:22.106277   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:22.106287   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:22.111841   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:22.167275   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:22.168436   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.168488   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.168505   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.168538   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.563375   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:22.563390   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:22.678087   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.678107   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.678118   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.678127   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.678997   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:22.679010   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:28.419344   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:28.419383   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:28.419395   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:28.443700   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:12:33.165174   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:12:33.165187   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165353   20650 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:12:33.165363   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165462   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.165555   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.165665   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165766   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.166032   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.166168   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.166176   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:12:33.233946   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:12:33.233965   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.234107   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.234213   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234303   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234419   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.234574   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.234722   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.234733   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:12:33.296276   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:12:33.296296   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:12:33.296314   20650 buildroot.go:174] setting up certificates
	I1105 10:12:33.296331   20650 provision.go:84] configureAuth start
	I1105 10:12:33.296340   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.296489   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:33.296589   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.296674   20650 provision.go:143] copyHostCerts
	I1105 10:12:33.296705   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296779   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:12:33.296787   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296976   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:12:33.297202   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297251   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:12:33.297256   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297953   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:12:33.298150   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298199   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:12:33.298205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298290   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:12:33.298468   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:12:33.417814   20650 provision.go:177] copyRemoteCerts
	I1105 10:12:33.417886   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:12:33.417904   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.418044   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.418142   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.418231   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.418333   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:33.452233   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:12:33.452305   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 10:12:33.471837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:12:33.471904   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:12:33.491510   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:12:33.491572   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:12:33.511221   20650 provision.go:87] duration metric: took 214.877215ms to configureAuth
	I1105 10:12:33.511235   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:12:33.511399   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:33.511412   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:33.511554   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.511653   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.511767   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511859   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511944   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.512074   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.512201   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.512209   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:12:33.567448   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:12:33.567460   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:12:33.567540   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:12:33.567552   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.567685   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.567782   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567875   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567957   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.568105   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.568243   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.568289   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:12:33.633746   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:12:33.633770   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.633912   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.634017   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634113   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634221   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.634373   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.634523   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.634538   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:12:35.361033   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:12:35.361047   20650 machine.go:96] duration metric: took 13.256219662s to provisionDockerMachine
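	The `diff -u … || { mv …; systemctl … }` command above is a write-if-changed guard: the fresh unit is staged as `docker.service.new`, and it is only promoted (followed by a daemon-reload, enable, and restart) when it differs from, or as in this run predates, the installed unit. A minimal sketch of the same pattern against temporary files (paths are illustrative; the systemctl steps are omitted):

```shell
set -eu
dir=$(mktemp -d)
# Stage the candidate unit next to where the live one would be.
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
# No existing unit yet, so diff fails (the "can't stat" case in the log)
# and the staged file is promoted into place.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"
cat "$dir/docker.service"
```

Because an unchanged unit makes `diff` succeed, re-running the step is a no-op and the service is not needlessly restarted.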
	I1105 10:12:35.361058   20650 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:12:35.361081   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:12:35.361095   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.361306   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:12:35.361323   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.361415   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.361506   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.361580   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.361669   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.396970   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:12:35.400946   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:12:35.400961   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:12:35.401074   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:12:35.401496   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:12:35.401503   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:12:35.401766   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:12:35.411536   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:35.443784   20650 start.go:296] duration metric: took 82.704716ms for postStartSetup
	I1105 10:12:35.443806   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.444003   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:12:35.444016   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.444100   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.444180   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.444258   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.444349   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.477407   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:12:35.477482   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:12:35.509435   20650 fix.go:56] duration metric: took 13.580030444s for fixHost
	I1105 10:12:35.509456   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.509592   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.509688   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509776   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.510031   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:35.510178   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:35.510185   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:12:35.565839   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830355.864292832
	
	I1105 10:12:35.565852   20650 fix.go:216] guest clock: 1730830355.864292832
	I1105 10:12:35.565857   20650 fix.go:229] Guest: 2024-11-05 10:12:35.864292832 -0800 PST Remote: 2024-11-05 10:12:35.509447 -0800 PST m=+14.061466364 (delta=354.845832ms)
	I1105 10:12:35.565875   20650 fix.go:200] guest clock delta is within tolerance: 354.845832ms
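	The guest-clock check above reduces to subtracting the host timestamp from the guest's `date +%s.%N` reading and comparing the skew against a tolerance. A sketch using the values from this run (the 1000ms threshold here is an assumed illustration, not minikube's actual constant):

```shell
guest=1730830355.864292832   # guest's `date +%s.%N` output, from the log
remote=1730830355.509447     # host-side clock reading, from the log
# awk handles the fractional-second arithmetic; round to whole milliseconds.
delta_ms=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.0f", (g - r) * 1000 }')
echo "guest clock delta: ${delta_ms}ms"
```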
	I1105 10:12:35.565882   20650 start.go:83] releasing machines lock for "ha-213000", held for 13.636511126s
	I1105 10:12:35.565900   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566049   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:35.566151   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566446   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566554   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566709   20650 ssh_runner.go:195] Run: cat /version.json
	I1105 10:12:35.566721   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.566806   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.566888   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.566979   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567064   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.567357   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:12:35.567386   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.567477   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.567559   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.567637   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567715   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.649786   20650 ssh_runner.go:195] Run: systemctl --version
	I1105 10:12:35.655155   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:12:35.659391   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:12:35.659449   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:12:35.672884   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:12:35.672896   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.672997   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:35.691142   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:12:35.700361   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:12:35.709604   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:12:35.709664   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:12:35.718677   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.727574   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:12:35.736665   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.745463   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:12:35.754435   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:12:35.763449   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:12:35.772263   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:12:35.781386   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:12:35.789651   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:12:35.789704   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:12:35.798805   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:12:35.807011   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:35.912193   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:12:35.927985   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.928078   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:12:35.940041   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.954880   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:12:35.969797   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.981073   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:35.992124   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:12:36.016061   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:36.027432   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:36.042843   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:12:36.045910   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:12:36.054070   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:12:36.067653   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:12:36.164803   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:12:36.262358   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:12:36.262434   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:12:36.276549   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:36.372055   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:12:38.718640   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346585524s)
	I1105 10:12:38.718725   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:12:38.729009   20650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:12:38.741745   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:38.752392   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:12:38.846699   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:12:38.961329   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.072900   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:12:39.086802   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:39.097743   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.205555   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:12:39.272726   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:12:39.273861   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:12:39.278279   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:12:39.278336   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:12:39.281386   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:12:39.307263   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:12:39.307378   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.325423   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.384603   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:12:39.384677   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:39.385383   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:12:39.389204   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
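	The one-liner above refreshes the `host.minikube.internal` entry idempotently: filter out any stale line, append the current gateway IP, and copy the result back over `/etc/hosts`. A sketch of the same pattern against a temp file (plain `mv` stands in for the `sudo cp` via `/tmp/h.$$`):

```shell
hosts=$(mktemp)
# Seed the file with a stale host.minikube.internal entry.
printf '127.0.0.1\tlocalhost\n192.169.0.99\thost.minikube.internal\n' > "$hosts"
# Drop the old entry, append the current one, then swap the file in.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again leaves exactly one entry in place, which is why minikube can repeat this on every start.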
	I1105 10:12:39.398876   20650 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:12:39.398970   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:39.399044   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.411346   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.411370   20650 docker.go:619] Images already preloaded, skipping extraction
	I1105 10:12:39.411458   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.424491   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.424511   20650 cache_images.go:84] Images are preloaded, skipping loading
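	The "Images already preloaded, skipping extraction" decision above comes down to set membership: the `docker images --format {{.Repository}}:{{.Tag}}` listing is checked against the expected preload manifest, and extraction is skipped only when nothing is missing. A reduced sketch of that check (image sets shortened, variable names hypothetical):

```shell
# Expected preload set vs. what the runtime reports (order differs, as in
# the two listings above; only membership matters).
expected="registry.k8s.io/pause:3.10 gcr.io/k8s-minikube/storage-provisioner:v5"
got="gcr.io/k8s-minikube/storage-provisioner:v5 registry.k8s.io/pause:3.10"
missing=""
for img in $expected; do
  case " $got " in
    *" $img "*) ;;                    # image already present
    *) missing="$missing $img" ;;     # image would need extraction
  esac
done
if [ -z "$missing" ]; then
  status="preloaded, skipping extraction"
else
  status="extracting:$missing"
fi
echo "$status"
```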
	I1105 10:12:39.424518   20650 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:12:39.424600   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:12:39.424690   20650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:12:39.458782   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:39.458796   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:39.458807   20650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:12:39.458824   20650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:12:39.458910   20650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:12:39.458922   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:12:39.459000   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:12:39.472063   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:12:39.472130   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:12:39.472197   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:12:39.480694   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:12:39.480761   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:12:39.488010   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:12:39.501448   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:12:39.514699   20650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:12:39.528604   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:12:39.542711   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:12:39.545676   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.555042   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.651842   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:12:39.666232   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:12:39.666245   20650 certs.go:194] generating shared ca certs ...
	I1105 10:12:39.666254   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.666455   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:12:39.666548   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:12:39.666558   20650 certs.go:256] generating profile certs ...
	I1105 10:12:39.666641   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:12:39.666660   20650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b
	I1105 10:12:39.666677   20650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:12:39.768951   20650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b ...
	I1105 10:12:39.768965   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b: {Name:mk94691c5901a2a72a9bc83f127c5282216d457c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.769986   20650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b ...
	I1105 10:12:39.770003   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b: {Name:mk80fa552a8414775a1a2e3534b5be60adeae6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.770739   20650 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:12:39.770972   20650 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:12:39.771252   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:12:39.771262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:12:39.771288   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:12:39.771314   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:12:39.771335   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:12:39.771353   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:12:39.771376   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:12:39.771395   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:12:39.771413   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:12:39.771524   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:12:39.771579   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:12:39.771588   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:12:39.771622   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:12:39.771657   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:12:39.771686   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:12:39.771750   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:39.771787   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:12:39.771817   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:39.771836   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:12:39.772313   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:12:39.799103   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:12:39.823713   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:12:39.848122   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:12:39.876362   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:12:39.898968   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:12:39.924496   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:12:39.975578   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:12:40.017567   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:12:40.062386   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:12:40.134510   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:12:40.170763   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:12:40.196135   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:12:40.201525   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:12:40.214259   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222331   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222400   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.235959   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:12:40.247519   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:12:40.256007   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259529   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259576   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.263770   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:12:40.272126   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:12:40.280328   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283753   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283804   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.288095   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:12:40.296378   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:12:40.300009   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:12:40.304421   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:12:40.309440   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:12:40.314156   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:12:40.318720   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:12:40.323054   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:12:40.327653   20650 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:40.327789   20650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:12:40.338896   20650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:12:40.346426   20650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 10:12:40.346451   20650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 10:12:40.346505   20650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 10:12:40.354659   20650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:12:40.354973   20650 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-213000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355052   20650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19910-17277/kubeconfig needs updating (will repair): [kubeconfig missing "ha-213000" cluster setting kubeconfig missing "ha-213000" context setting]
	I1105 10:12:40.355252   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.355659   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355866   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:12:40.356225   20650 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:12:40.356390   20650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 10:12:40.363779   20650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1105 10:12:40.363792   20650 kubeadm.go:597] duration metric: took 17.337248ms to restartPrimaryControlPlane
	I1105 10:12:40.363798   20650 kubeadm.go:394] duration metric: took 36.151791ms to StartCluster
	I1105 10:12:40.363807   20650 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.363904   20650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.364287   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.364493   20650 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:12:40.364506   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:12:40.364518   20650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:12:40.364641   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.406496   20650 out.go:177] * Enabled addons: 
	I1105 10:12:40.427423   20650 addons.go:510] duration metric: took 62.890869ms for enable addons: enabled=[]
	I1105 10:12:40.427463   20650 start.go:246] waiting for cluster config update ...
	I1105 10:12:40.427476   20650 start.go:255] writing updated cluster config ...
	I1105 10:12:40.449627   20650 out.go:201] 
	I1105 10:12:40.470603   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.470682   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.492690   20650 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:12:40.534643   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:40.534678   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:40.534889   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:40.534908   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:40.535035   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.535960   20650 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:40.536081   20650 start.go:364] duration metric: took 95.311µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:12:40.536107   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:40.536116   20650 fix.go:54] fixHost starting: m02
	I1105 10:12:40.536544   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:40.536591   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:40.548252   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59033
	I1105 10:12:40.548561   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:40.548918   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:40.548932   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:40.549159   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:40.549276   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.549386   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:40.549477   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.549545   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:40.550641   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.550670   20650 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:12:40.550679   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:12:40.550782   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:40.571623   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:12:40.592623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:12:40.592918   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.592966   20650 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:12:40.594491   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.594501   20650 main.go:141] libmachine: (ha-213000-m02) DBG | pid 20524 is in state "Stopped"
	I1105 10:12:40.594516   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:12:40.594967   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:12:40.619713   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:12:40.619737   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:40.619893   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619922   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619952   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:40.619999   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:40.620018   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:40.621465   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Pid is 20673
	I1105 10:12:40.621946   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:12:40.621963   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.622060   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:12:40.623801   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:12:40.623940   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:40.623961   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:12:40.623986   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:40.624000   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:40.624015   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:40.624016   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:12:40.624023   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:12:40.624043   20650 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:12:40.624734   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:12:40.624956   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.625445   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:40.625455   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.625562   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:12:40.625653   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:12:40.625748   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.625874   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.626045   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:12:40.626222   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:40.626362   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:12:40.626369   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:40.631955   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:40.641267   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:40.642527   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:40.642544   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:40.642551   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:40.642561   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.034838   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:41.034853   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:41.149888   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:41.149903   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:41.149911   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:41.149917   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.150684   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:41.150696   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:46.914486   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:46.914552   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:46.914564   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:46.937828   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:15.697814   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:15.697829   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.697958   20650 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:13:15.697969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.698068   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.698166   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.698262   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698349   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698429   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.698590   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.698739   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.698748   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:13:15.770158   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:13:15.770174   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.770319   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.770428   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770526   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.770785   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.770922   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.770933   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:15.838124   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:15.838139   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:15.838159   20650 buildroot.go:174] setting up certificates
	I1105 10:13:15.838166   20650 provision.go:84] configureAuth start
	I1105 10:13:15.838173   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.838309   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:15.838391   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.838477   20650 provision.go:143] copyHostCerts
	I1105 10:13:15.838504   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838551   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:15.838557   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838677   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:15.838892   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.838922   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:15.838926   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.839007   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:15.839169   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839200   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:15.839205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839275   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:15.839440   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:13:15.878682   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:15.878747   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:15.878761   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.878912   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.879015   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.879122   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.879221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:15.916727   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:15.916795   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:15.936280   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:15.936341   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:13:15.956339   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:15.956417   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:15.976131   20650 provision.go:87] duration metric: took 137.957663ms to configureAuth
	I1105 10:13:15.976145   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:15.976324   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:15.976339   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:15.976475   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.976573   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.976661   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976740   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976813   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.976940   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.977065   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.977072   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:16.038725   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:16.038739   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:16.038839   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:16.038851   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.038998   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.039098   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039192   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039283   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.039436   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.039572   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.039618   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:16.112446   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:16.112468   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.112623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.112715   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112811   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112892   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.113049   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.113223   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.113236   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:17.783702   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:17.783717   20650 machine.go:96] duration metric: took 37.158599705s to provisionDockerMachine
	I1105 10:13:17.783726   20650 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:13:17.783733   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:17.783744   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.783939   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:17.783953   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.784616   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.785152   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.785404   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.785500   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.822226   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:17.825293   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:17.825304   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:17.825392   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:17.825532   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:17.825538   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:17.825699   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:17.832977   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:17.852599   20650 start.go:296] duration metric: took 68.865935ms for postStartSetup
	I1105 10:13:17.852645   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.852828   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:17.852840   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.852946   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.853034   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.853111   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.853195   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.891315   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:17.891389   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:17.944504   20650 fix.go:56] duration metric: took 37.408724528s for fixHost
	I1105 10:13:17.944528   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.944681   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.944779   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944880   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944973   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.945125   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:17.945257   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:17.945264   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:18.009463   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830397.963598694
	
	I1105 10:13:18.009476   20650 fix.go:216] guest clock: 1730830397.963598694
	I1105 10:13:18.009482   20650 fix.go:229] Guest: 2024-11-05 10:13:17.963598694 -0800 PST Remote: 2024-11-05 10:13:17.944519 -0800 PST m=+56.496923048 (delta=19.079694ms)
	I1105 10:13:18.009492   20650 fix.go:200] guest clock delta is within tolerance: 19.079694ms
	I1105 10:13:18.009495   20650 start.go:83] releasing machines lock for "ha-213000-m02", held for 37.47374268s
	I1105 10:13:18.009512   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.009649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:18.032281   20650 out.go:177] * Found network options:
	I1105 10:13:18.052088   20650 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:13:18.073014   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.073053   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.073969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074186   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074319   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:13:18.074355   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:13:18.074369   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.074467   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:18.074483   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:18.074488   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074646   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074801   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.074850   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074993   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:18.075008   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.075127   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:13:18.108947   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:18.109027   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:18.155414   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:18.155436   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.155551   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.172114   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:18.180388   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:18.188528   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.188587   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:18.196712   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.204897   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:18.213206   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.221579   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:18.230149   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:18.238366   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:18.246617   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:18.255037   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:18.262631   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:18.262690   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:18.270933   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:18.278375   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.375712   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:18.394397   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.394485   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:18.410636   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.423391   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:18.441876   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.452612   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.462897   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:18.485662   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.495897   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.511009   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:18.513991   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:18.521476   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:18.534868   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:18.632191   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:18.734981   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.735009   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:18.749050   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.853897   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:13:21.134871   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28097554s)
	I1105 10:13:21.134948   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:13:21.146360   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.157264   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:13:21.267741   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:13:21.382285   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.483458   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:13:21.496077   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.506512   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.618640   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:13:21.685448   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:13:21.685559   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:13:21.689888   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:13:21.689958   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:13:21.693059   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:13:21.721401   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:13:21.721489   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.737796   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.775162   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:13:21.818311   20650 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:13:21.839158   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:21.839596   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:13:21.844257   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:21.854347   20650 mustload.go:65] Loading cluster: ha-213000
	I1105 10:13:21.854526   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:21.854763   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.854810   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.866117   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59055
	I1105 10:13:21.866449   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.866785   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.866795   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.867005   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.867094   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:13:21.867180   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:21.867248   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:13:21.868436   20650 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:13:21.868696   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.868721   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.879648   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59057
	I1105 10:13:21.879951   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.880304   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.880326   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.880564   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.880680   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:13:21.880800   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:13:21.880806   20650 certs.go:194] generating shared ca certs ...
	I1105 10:13:21.880817   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:13:21.880976   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:13:21.881033   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:13:21.881041   20650 certs.go:256] generating profile certs ...
	I1105 10:13:21.881133   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:13:21.881677   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:13:21.881747   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:13:21.881756   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:13:21.881777   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:13:21.881800   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:13:21.881819   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:13:21.881837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:13:21.881855   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:13:21.881874   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:13:21.881891   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:13:21.881971   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:13:21.882008   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:13:21.882016   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:13:21.882051   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:13:21.882090   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:13:21.882131   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:13:21.882199   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:21.882240   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:21.882262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:13:21.882285   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:13:21.882314   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:13:21.882395   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:13:21.882480   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:13:21.882563   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:13:21.882639   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:13:21.908416   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:13:21.911559   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:13:21.921605   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:13:21.924753   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:13:21.933495   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:13:21.936611   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:13:21.945312   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:13:21.948273   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:13:21.957659   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:13:21.960739   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:13:21.969191   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:13:21.972356   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:13:21.981306   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:13:22.001469   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:13:22.021181   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:13:22.040587   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:13:22.060078   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:13:22.079285   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:13:22.098538   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:13:22.118296   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:13:22.137769   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:13:22.156929   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:13:22.176353   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:13:22.195510   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:13:22.209194   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:13:22.222827   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:13:22.236546   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:13:22.250070   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:13:22.263444   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:13:22.276970   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:13:22.290700   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:13:22.294935   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:13:22.304164   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307578   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307635   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.311940   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:13:22.320904   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:13:22.329872   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333271   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333318   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.337523   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:13:22.346681   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:13:22.355874   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359764   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359823   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.364168   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:13:22.373288   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:13:22.376713   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:13:22.381681   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:13:22.386495   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:13:22.390985   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:13:22.395318   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:13:22.399578   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:13:22.403998   20650 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:13:22.404052   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:13:22.404067   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:13:22.404115   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:13:22.417096   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:13:22.417139   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:13:22.417203   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:13:22.426058   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:13:22.426117   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:13:22.434774   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:13:22.448444   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:13:22.461910   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:13:22.475772   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:13:22.478602   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:22.487944   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.594180   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.608389   20650 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:13:22.608597   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:22.629533   20650 out.go:177] * Verifying Kubernetes components...
	I1105 10:13:22.671507   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.795219   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.807186   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:13:22.807391   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:13:22.807429   20650 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:13:22.807616   20650 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:22.807698   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:22.807704   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:22.807711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:22.807714   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.750948   20650 round_trippers.go:574] Response Status: 200 OK in 8943 milliseconds
	I1105 10:13:31.752572   20650 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:13:31.752585   20650 node_ready.go:38] duration metric: took 8.945035646s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:31.752614   20650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:31.752661   20650 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:13:31.752671   20650 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:13:31.752720   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:31.752727   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.752733   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.752738   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.802951   20650 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1105 10:13:31.809829   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.809889   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:13:31.809894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.809900   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.809904   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.814415   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:31.815355   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.815363   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.815369   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.815373   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.822380   20650 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:13:31.822662   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.822672   20650 pod_ready.go:82] duration metric: took 12.826683ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822679   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822728   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:13:31.822733   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.822739   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.822744   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.826328   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.826822   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.826831   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.826837   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.826841   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.829860   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.830181   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.830191   20650 pod_ready.go:82] duration metric: took 7.507226ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830198   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830235   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:13:31.830240   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.830245   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.830252   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.832219   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:31.832697   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.832706   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.832711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.832715   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.835276   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.835692   20650 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.835701   20650 pod_ready.go:82] duration metric: took 5.498306ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835709   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835747   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:13:31.835752   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.835758   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.835762   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.841537   20650 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:13:31.841973   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:31.841981   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.841986   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.841990   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844531   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.844869   20650 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.844879   20650 pod_ready.go:82] duration metric: took 9.164525ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844885   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844921   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:13:31.844926   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.844931   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844936   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.848600   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.952821   20650 request.go:632] Waited for 103.696334ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952860   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952865   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.952873   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.952877   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.955043   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:31.955226   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955236   20650 pod_ready.go:82] duration metric: took 110.346207ms for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:31.955242   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955257   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.153855   20650 request.go:632] Waited for 198.56381ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153901   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153906   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.153912   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.153915   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.156326   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.354721   20650 request.go:632] Waited for 197.883079ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354800   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354808   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.354816   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.354821   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.357314   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.357758   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.357771   20650 pod_ready.go:82] duration metric: took 402.50745ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.357779   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.554904   20650 request.go:632] Waited for 197.060501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555009   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555040   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.555059   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.555071   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.562819   20650 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 10:13:32.752788   20650 request.go:632] Waited for 189.599558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752820   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752825   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.752864   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.752870   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.755075   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.755378   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.755387   20650 pod_ready.go:82] duration metric: took 397.605979ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.755394   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.952787   20650 request.go:632] Waited for 197.357502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952836   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952842   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.952853   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.955636   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.153249   20650 request.go:632] Waited for 196.999871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153317   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153323   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.153329   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.153334   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.155712   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:33.155782   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155797   20650 pod_ready.go:82] duration metric: took 400.400564ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:33.155804   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155810   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.353944   20650 request.go:632] Waited for 198.075152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354021   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354033   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.354041   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.354047   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.356715   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.553130   20650 request.go:632] Waited for 196.01942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553198   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553204   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.553237   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.553242   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.555527   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.555890   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.555899   20650 pod_ready.go:82] duration metric: took 400.086552ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.555906   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.752845   20650 request.go:632] Waited for 196.894857ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752909   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752915   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.752921   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.752925   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.754805   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.953311   20650 request.go:632] Waited for 197.807461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953353   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953381   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.953389   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.953392   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.955376   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.955836   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.955846   20650 pod_ready.go:82] duration metric: took 399.938695ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.955855   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.153021   20650 request.go:632] Waited for 197.093812ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153060   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153065   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.153072   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.153075   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.155546   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:34.353423   20650 request.go:632] Waited for 197.340662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353457   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353463   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.353469   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.353472   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.355383   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.355495   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355514   20650 pod_ready.go:82] duration metric: took 399.657027ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.355524   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355532   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.553620   20650 request.go:632] Waited for 198.034445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553677   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553683   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.553689   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.553694   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.555564   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:34.753369   20650 request.go:632] Waited for 197.394131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753424   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753431   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.753436   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.753440   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.755363   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.755426   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755436   20650 pod_ready.go:82] duration metric: took 399.890345ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.755442   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755446   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.953531   20650 request.go:632] Waited for 198.038372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953615   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953624   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.953631   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.953636   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.955951   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.153813   20650 request.go:632] Waited for 196.981939ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153879   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.153903   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.153910   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.156466   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.157099   20650 pod_ready.go:93] pod "kube-proxy-m45pk" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.157109   20650 pod_ready.go:82] duration metric: took 401.65588ms for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.157117   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.354248   20650 request.go:632] Waited for 197.082179ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354294   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354302   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.354340   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.354347   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.357098   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.552778   20650 request.go:632] Waited for 195.237923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552882   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552910   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.552918   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.552923   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.555242   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.555725   20650 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.555734   20650 pod_ready.go:82] duration metric: took 398.615884ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.555748   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.752802   20650 request.go:632] Waited for 196.982082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752849   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752855   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.752861   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.752865   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.755216   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.953665   20650 request.go:632] Waited for 197.923503ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953733   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953742   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.953751   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.953758   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.955875   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.956268   20650 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.956277   20650 pod_ready.go:82] duration metric: took 400.526917ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.956283   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.153409   20650 request.go:632] Waited for 197.086533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153486   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153496   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.153504   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.153513   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.156474   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.354367   20650 request.go:632] Waited for 197.602225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354401   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354406   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.354421   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.354441   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.356601   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.356994   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.357004   20650 pod_ready.go:82] duration metric: took 400.718541ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.357011   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.554145   20650 request.go:632] Waited for 197.038016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554243   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554252   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.554264   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.554270   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.556774   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.753404   20650 request.go:632] Waited for 196.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753437   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753442   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.753448   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.753452   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.756764   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:36.757112   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.757122   20650 pod_ready.go:82] duration metric: took 400.109512ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.757130   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.953514   20650 request.go:632] Waited for 196.347448ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953546   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953558   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.953565   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.953575   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.955940   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.154619   20650 request.go:632] Waited for 198.194145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154663   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154669   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.154676   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.154695   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.157438   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:37.157524   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157535   20650 pod_ready.go:82] duration metric: took 400.40261ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:37.157542   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157547   20650 pod_ready.go:39] duration metric: took 5.404967892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:37.157569   20650 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:13:37.157646   20650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:13:37.171805   20650 api_server.go:72] duration metric: took 14.563521484s to wait for apiserver process to appear ...
	I1105 10:13:37.171821   20650 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:13:37.171836   20650 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:13:37.176463   20650 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:13:37.176507   20650 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:13:37.176512   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.176518   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.176523   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.177377   20650 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:13:37.177442   20650 api_server.go:141] control plane version: v1.31.2
	I1105 10:13:37.177460   20650 api_server.go:131] duration metric: took 5.62791ms to wait for apiserver health ...
	I1105 10:13:37.177467   20650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:13:37.352914   20650 request.go:632] Waited for 175.404088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352969   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352975   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.352982   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.352986   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.357439   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.362936   20650 system_pods.go:59] 26 kube-system pods found
	I1105 10:13:37.362960   20650 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.362964   20650 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.362968   20650 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.362973   20650 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.362980   20650 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.362984   20650 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.362987   20650 system_pods.go:61] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.362993   20650 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.362999   20650 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.363003   20650 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.363007   20650 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.363013   20650 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.363016   20650 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.363021   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.363024   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.363027   20650 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.363030   20650 system_pods.go:61] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.363036   20650 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 10:13:37.363040   20650 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.363042   20650 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.363046   20650 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.363051   20650 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.363055   20650 system_pods.go:61] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.363059   20650 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.363063   20650 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.363065   20650 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.363070   20650 system_pods.go:74] duration metric: took 185.599377ms to wait for pod list to return data ...
	I1105 10:13:37.363076   20650 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:13:37.554093   20650 request.go:632] Waited for 190.967335ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554130   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554138   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.554152   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.554156   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.557460   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:37.557594   20650 default_sa.go:45] found service account: "default"
	I1105 10:13:37.557604   20650 default_sa.go:55] duration metric: took 194.526347ms for default service account to be created ...
	I1105 10:13:37.557612   20650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:13:37.752842   20650 request.go:632] Waited for 195.185977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752875   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752881   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.752902   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.752907   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.757021   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.762493   20650 system_pods.go:86] 26 kube-system pods found
	I1105 10:13:37.762505   20650 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.762509   20650 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.762512   20650 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.762517   20650 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.762521   20650 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.762525   20650 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.762528   20650 system_pods.go:89] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.762532   20650 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.762535   20650 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.762539   20650 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.762543   20650 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.762548   20650 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.762551   20650 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.762557   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.762561   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.762566   20650 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.762569   20650 system_pods.go:89] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.762572   20650 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:13:37.762575   20650 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.762578   20650 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.762583   20650 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.762590   20650 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.762594   20650 system_pods.go:89] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.762596   20650 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.762600   20650 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.762602   20650 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.762607   20650 system_pods.go:126] duration metric: took 204.991818ms to wait for k8s-apps to be running ...
	I1105 10:13:37.762614   20650 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:13:37.762682   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:13:37.777110   20650 system_svc.go:56] duration metric: took 14.491738ms WaitForService to wait for kubelet
	I1105 10:13:37.777127   20650 kubeadm.go:582] duration metric: took 15.16885159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:13:37.777138   20650 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:13:37.952770   20650 request.go:632] Waited for 175.557407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952816   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952827   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.952839   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.955592   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.956364   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956379   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956390   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956393   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956397   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956399   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956403   20650 node_conditions.go:105] duration metric: took 179.263041ms to run NodePressure ...
	I1105 10:13:37.956411   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:13:37.956426   20650 start.go:255] writing updated cluster config ...
	I1105 10:13:37.978800   20650 out.go:201] 
	I1105 10:13:38.000237   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:38.000353   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.022912   20650 out.go:177] * Starting "ha-213000-m04" worker node in "ha-213000" cluster
	I1105 10:13:38.065816   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:13:38.065838   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:13:38.065942   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:13:38.065952   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:13:38.066024   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.066548   20650 start.go:360] acquireMachinesLock for ha-213000-m04: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:13:38.066601   20650 start.go:364] duration metric: took 39.836µs to acquireMachinesLock for "ha-213000-m04"
	I1105 10:13:38.066614   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:13:38.066619   20650 fix.go:54] fixHost starting: m04
	I1105 10:13:38.066839   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:38.066859   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:38.078183   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59062
	I1105 10:13:38.078511   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:38.078858   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:38.078877   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:38.079111   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:38.079203   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.079308   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:13:38.079392   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.079457   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:13:38.080557   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.080601   20650 fix.go:112] recreateIfNeeded on ha-213000-m04: state=Stopped err=<nil>
	I1105 10:13:38.080610   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	W1105 10:13:38.080695   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:13:38.101909   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m04" ...
	I1105 10:13:38.150121   20650 main.go:141] libmachine: (ha-213000-m04) Calling .Start
	I1105 10:13:38.150270   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.150297   20650 main.go:141] libmachine: (ha-213000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid
	I1105 10:13:38.151495   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.151504   20650 main.go:141] libmachine: (ha-213000-m04) DBG | pid 20571 is in state "Stopped"
	I1105 10:13:38.151536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid...
	I1105 10:13:38.151981   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Using UUID 70721578-92b7-4edc-935c-43ebcacd790c
	I1105 10:13:38.175524   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Generated MAC 1a:a3:f2:a5:2e:39
	I1105 10:13:38.175551   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:13:38.175756   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175805   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175883   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70721578-92b7-4edc-935c-43ebcacd790c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:13:38.175929   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70721578-92b7-4edc-935c-43ebcacd790c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:13:38.175943   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:13:38.177358   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Pid is 20690
	I1105 10:13:38.177760   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Attempt 0
	I1105 10:13:38.177775   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.177790   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:13:38.179817   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Searching for 1a:a3:f2:a5:2e:39 in /var/db/dhcpd_leases ...
	I1105 10:13:38.179881   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:13:38.179891   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:13:38.179930   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:13:38.179944   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:13:38.179961   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:13:38.179966   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found match: 1a:a3:f2:a5:2e:39
	I1105 10:13:38.179974   20650 main.go:141] libmachine: (ha-213000-m04) DBG | IP: 192.169.0.8
	I1105 10:13:38.180001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetConfigRaw
	I1105 10:13:38.180736   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:38.180968   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.181459   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:13:38.181471   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.181605   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:38.181707   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:38.181828   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.181929   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.182026   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:38.182165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:38.182315   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:38.182325   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:13:38.188897   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:13:38.198428   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:13:38.199856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.199886   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.199916   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.199953   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.594841   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:13:38.594856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:13:38.709716   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.709736   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.709743   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.709759   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.710592   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:13:38.710604   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:13:44.475519   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:13:44.475536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:13:44.475546   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:13:44.498793   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:49.237329   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:49.237349   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237488   20650 buildroot.go:166] provisioning hostname "ha-213000-m04"
	I1105 10:13:49.237500   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237590   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.237684   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.237765   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237842   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237935   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.238078   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.238220   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.238229   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m04 && echo "ha-213000-m04" | sudo tee /etc/hostname
	I1105 10:13:49.297417   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m04
	
	I1105 10:13:49.297437   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.297576   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.297673   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297757   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297853   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.297997   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.298162   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.298173   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:49.354308   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:49.354323   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:49.354341   20650 buildroot.go:174] setting up certificates
	I1105 10:13:49.354349   20650 provision.go:84] configureAuth start
	I1105 10:13:49.354357   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.354507   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:49.354606   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.354711   20650 provision.go:143] copyHostCerts
	I1105 10:13:49.354741   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354793   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:49.354799   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354909   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:49.355124   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355155   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:49.355159   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355228   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:49.355419   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355454   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:49.355461   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355528   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:49.355690   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m04 san=[127.0.0.1 192.169.0.8 ha-213000-m04 localhost minikube]
	I1105 10:13:49.396705   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:49.396767   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:49.396780   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.396910   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.397015   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.397117   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.397221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:49.427813   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:49.427885   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:49.447457   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:49.447518   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:49.467286   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:49.467359   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:13:49.487192   20650 provision.go:87] duration metric: took 132.83626ms to configureAuth
	I1105 10:13:49.487209   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:49.487380   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:49.487394   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:49.487531   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.487631   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.487715   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487801   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487890   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.488033   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.488154   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.488162   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:49.537465   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:49.537478   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:49.537561   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:49.537571   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.537704   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.537799   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537884   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537998   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.538165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.538298   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.538345   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:49.598479   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:49.598502   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.598649   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.598747   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598833   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598947   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.599089   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.599234   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.599246   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:51.207763   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:51.207782   20650 machine.go:96] duration metric: took 13.026432223s to provisionDockerMachine
	I1105 10:13:51.207792   20650 start.go:293] postStartSetup for "ha-213000-m04" (driver="hyperkit")
	I1105 10:13:51.207801   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:51.207816   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.208031   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:51.208047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.208140   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.208231   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.208318   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.208438   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.241123   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:51.244240   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:51.244251   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:51.244336   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:51.244477   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:51.244484   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:51.244646   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:51.252753   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:51.271782   20650 start.go:296] duration metric: took 63.980744ms for postStartSetup
	I1105 10:13:51.271803   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.271989   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:51.272001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.272093   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.272178   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.272277   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.272371   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.304392   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:51.304469   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:51.358605   20650 fix.go:56] duration metric: took 13.292102469s for fixHost
	I1105 10:13:51.358630   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.358783   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.358880   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.358963   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.359053   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.359195   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:51.359329   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:51.359336   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:51.407868   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830431.709090009
	
	I1105 10:13:51.407885   20650 fix.go:216] guest clock: 1730830431.709090009
	I1105 10:13:51.407890   20650 fix.go:229] Guest: 2024-11-05 10:13:51.709090009 -0800 PST Remote: 2024-11-05 10:13:51.35862 -0800 PST m=+89.911326584 (delta=350.470009ms)
	I1105 10:13:51.407901   20650 fix.go:200] guest clock delta is within tolerance: 350.470009ms
	I1105 10:13:51.407906   20650 start.go:83] releasing machines lock for "ha-213000-m04", held for 13.34141889s
	I1105 10:13:51.407923   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.408055   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:51.430524   20650 out.go:177] * Found network options:
	I1105 10:13:51.451633   20650 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:13:51.472140   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.472164   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.472179   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472739   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472888   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.473015   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1105 10:13:51.473025   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.473039   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.473047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473124   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:51.473137   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473175   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473286   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473299   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473387   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473400   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473487   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.473517   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473599   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	W1105 10:13:51.501432   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:51.501515   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:51.553972   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:51.553993   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.554083   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.569365   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:51.577607   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:51.586014   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:51.586084   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:51.594293   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.602646   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:51.610969   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.619400   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:51.627741   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:51.635982   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:51.645401   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:51.653565   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:51.660899   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:51.660963   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:51.669419   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:51.677143   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:51.772664   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:51.792178   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.792270   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:51.808083   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.820868   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:51.842221   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.854583   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.865539   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:51.892869   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.904042   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.922494   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:51.928520   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:51.945780   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:51.962437   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:52.060460   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:52.163232   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:52.163260   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:52.178328   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:52.296397   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:14:53.349067   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016016812s)
	I1105 10:14:53.349159   20650 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:14:53.385876   20650 out.go:201] 
	W1105 10:14:53.422606   20650 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:14:53.422674   20650 out.go:270] * 
	W1105 10:14:53.423462   20650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:14:53.533703   20650 out.go:201] 
	
	
	==> Docker <==
	Nov 05 18:14:24 ha-213000 cri-dockerd[1411]: time="2024-11-05T18:14:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.320957280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321014942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321032889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321144470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358583815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358913638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358923588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.359308274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371019459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371180579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371195366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371264075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384883251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384945765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384958316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.385102977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.393595106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396454919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396464389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396559087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:54 ha-213000 dockerd[1151]: time="2024-11-05T18:14:54.321538330Z" level=info msg="ignoring event" container=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322187590Z" level=info msg="shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322448589Z" level=warning msg="cleaning up after shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322490228Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	568ed995df15d       8c811b4aec35f       30 seconds ago       Running             busybox                   2                   f5d092375dddf       busybox-7dff88458-q5j74
	a54d96a8e9e4d       9ca7e41918271       30 seconds ago       Running             kindnet-cni               2                   07702f76ce639       kindnet-hppzk
	820b778421b38       c69fa2e9cbf5f       30 seconds ago       Running             coredns                   2                   bc67a22cb5eff       coredns-7c65d6cfc9-cv2cc
	ca9011bea4440       c69fa2e9cbf5f       30 seconds ago       Running             coredns                   2                   703f8fe612ac5       coredns-7c65d6cfc9-q96rw
	85e7cccdf4831       505d571f5fd56       30 seconds ago       Running             kube-proxy                2                   7a4f7e3a95ced       kube-proxy-s8xxj
	ea27059bb8dad       6e38f40d628db       31 seconds ago       Exited              storage-provisioner       4                   7a18da25cf537       storage-provisioner
	43950f04c89aa       0486b6c53a1b5       About a minute ago   Running             kube-controller-manager   4                   3c4a95766d8df       kube-controller-manager-ha-213000
	8e0c0916fca71       9499c9960544e       About a minute ago   Running             kube-apiserver            4                   f2454c695936e       kube-apiserver-ha-213000
	897300e44633b       baf03d14a86fd       2 minutes ago        Running             kube-vip                  1                   f00a17fab8835       kube-vip-ha-213000
	ad7975173845f       847c7bc1a5418       2 minutes ago        Running             kube-scheduler            2                   5162e28d0e03d       kube-scheduler-ha-213000
	8a28e20a2bf3d       2e96e5913fc06       2 minutes ago        Running             etcd                      2                   acdca4d26c9f6       etcd-ha-213000
	ea0b432d94423       0486b6c53a1b5       2 minutes ago        Exited              kube-controller-manager   3                   3c4a95766d8df       kube-controller-manager-ha-213000
	16b5e8baed219       9499c9960544e       2 minutes ago        Exited              kube-apiserver            3                   f2454c695936e       kube-apiserver-ha-213000
	6668904ee766d       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       3                   58ac997dc49ae       storage-provisioner
	96799b06e508f       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   07d926acb1a6e       busybox-7dff88458-q5j74
	86ef547964bcb       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   5fe3e01a4f33a       coredns-7c65d6cfc9-q96rw
	dd08019aca606       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   00f7c155eb4b0       coredns-7c65d6cfc9-cv2cc
	4aec0d02658e0       505d571f5fd56       4 minutes ago        Exited              kube-proxy                1                   1ece5e2bcaf09       kube-proxy-s8xxj
	f9a05b099e4ee       9ca7e41918271       4 minutes ago        Exited              kindnet-cni               1                   fd311d6ed9c5c       kindnet-hppzk
	51c2df7fc859d       baf03d14a86fd       5 minutes ago        Exited              kube-vip                  0                   98323683c9082       kube-vip-ha-213000
	bdbc1a6e54924       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   474c9f706901d       etcd-ha-213000
	f1607d6ea7a30       847c7bc1a5418       5 minutes ago        Exited              kube-scheduler            1                   b217215a9cf0c       kube-scheduler-ha-213000
	
	
	==> coredns [820b778421b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59240 - 59060 "HINFO IN 4329632244317726903.7890662898760833477. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011788676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[675101378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.641) (total time: 30001ms):
	Trace[675101378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.641)
	Trace[675101378]: [30.00107355s] [30.00107355s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[792881874]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30001ms):
	Trace[792881874]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:14:54.642)
	Trace[792881874]: [30.001711346s] [30.001711346s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[34248386]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[34248386]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[34248386]: [30.000366606s] [30.000366606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [86ef547964bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33774 - 54633 "HINFO IN 1409488340311598538.4125883895955909161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004156009s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1322590960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1322590960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.870)
	Trace[1322590960]: [30.003129161s] [30.003129161s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1548400132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30002ms):
	Trace[1548400132]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:11:00.870)
	Trace[1548400132]: [30.002952972s] [30.002952972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1633349832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.870) (total time: 30002ms):
	Trace[1633349832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[1633349832]: [30.002091533s] [30.002091533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca9011bea444] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47030 - 28453 "HINFO IN 9030478600017221968.7137590874178245370. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011696462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[954770416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30002ms):
	Trace[954770416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:14:54.642)
	Trace[954770416]: [30.002259073s] [30.002259073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1172241105]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1172241105]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[1172241105]: [30.000198867s] [30.000198867s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1149531028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1149531028]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.645)
	Trace[1149531028]: [30.000272321s] [30.000272321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [dd08019aca60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56311 - 34269 "HINFO IN 2200850437967647570.948968209837946997. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0110095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819586440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30001ms):
	Trace[819586440]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:11:00.870)
	Trace[819586440]: [30.001860838s] [30.001860838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[58172056]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.869) (total time: 30000ms):
	Trace[58172056]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[58172056]: [30.000759284s] [30.000759284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1700347832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1700347832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.871)
	Trace[1700347832]: [30.003960758s] [30.003960758s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:14:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1892e4225dd5499cb35e29ff753a0c40
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    872d5ac1-d893-413e-b883-f1ad425b7c82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 30s                    kube-proxy       
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           80s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           80s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:14:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dc248d7debd421bb4108dc092da24e0
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    8a40793c-3b3c-49c9-a112-66a753c3fa07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 4m44s              kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           12m                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             8m40s              node-controller  Node ha-213000-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)    kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m48s              node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           4m47s              node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           4m5s               node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s (x7 over 92s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           80s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           80s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:11:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 efb6d3b228624c8f9582b78a04751815
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    6405d175-8027-4e75-bb1e-1845fbf67784
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-28tbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kindnet-p4bx6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-m45pk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m49s                  kube-proxy       
	  Normal   Starting                 3m11s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     9m56s (x2 over 9m57s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    9m56s (x2 over 9m57s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m56s (x2 over 9m57s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           9m55s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           9m54s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeReady                9m34s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m48s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             4m8s                   node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m13s (x2 over 3m13s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m13s (x2 over 3m13s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m13s (x2 over 3m13s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m13s                  kubelet          Node ha-213000-m04 has been rebooted, boot id: 6405d175-8027-4e75-bb1e-1845fbf67784
	  Normal   NodeReady                3m13s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           80s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           80s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             40s                    node-controller  Node ha-213000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036175] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007972] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.844917] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000007] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006614] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702887] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.233657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.342806] systemd-fstab-generator[457]: Ignoring "noauto" option for root device
	[  +0.102790] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +2.007272] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.269734] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.085327] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.060857] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.057582] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.475879] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.104726] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.119211] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.130514] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.455084] systemd-fstab-generator[1568]: Ignoring "noauto" option for root device
	[  +6.862189] kauditd_printk_skb: 190 callbacks suppressed
	[Nov 5 18:13] kauditd_printk_skb: 40 callbacks suppressed
	[Nov 5 18:14] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8a28e20a2bf3] <==
	{"level":"info","ts":"2024-11-05T18:13:31.135398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from 585aaf1d56a73c02 at term 3"}
	{"level":"info","ts":"2024-11-05T18:13:31.135413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-11-05T18:13:31.135422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.135426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.135442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 3001] sent MsgVote request to 585aaf1d56a73c02 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from 585aaf1d56a73c02 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-11-05T18:13:31.139678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"warn","ts":"2024-11-05T18:13:31.139920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.668851654s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-11-05T18:13:31.139942Z","caller":"traceutil/trace.go:171","msg":"trace[1810206807] range","detail":"{range_begin:; range_end:; }","duration":"1.668918965s","start":"2024-11-05T18:13:29.471018Z","end":"2024-11-05T18:13:31.139937Z","steps":["trace[1810206807] 'agreement among raft nodes before linearized reading'  (duration: 1.668850533s)"],"step_count":1}
	{"level":"error","ts":"2024-11-05T18:13:31.139988Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-11-05T18:13:31.146507Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-213000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:13:31.146769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:13:31.147253Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:13:31.148572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:13:31.149600Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-11-05T18:13:31.149813Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T18:13:31.149866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T18:13:31.148984Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:13:31.150885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-11-05T18:13:31.153408Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-11-05T18:13:31.155499Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-11-05T18:13:31.156813Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-11-05T18:14:55.084484Z","caller":"traceutil/trace.go:171","msg":"trace[689855107] transaction","detail":"{read_only:false; response_revision:2931; number_of_response:1; }","duration":"110.3233ms","start":"2024-11-05T18:14:54.974150Z","end":"2024-11-05T18:14:55.084473Z","steps":["trace[689855107] 'process raft request'  (duration: 110.263526ms)"],"step_count":1}
	
	
	==> etcd [bdbc1a6e5492] <==
	{"level":"warn","ts":"2024-11-05T18:12:13.699058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:09.275669Z","time spent":"4.423385981s","remote":"127.0.0.1:52268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:13.283499Z","time spent":"415.604721ms","remote":"127.0.0.1:52350","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.487277082s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699158Z","caller":"traceutil/trace.go:171","msg":"trace[1772748615] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"7.487289106s","start":"2024-11-05T18:12:06.211867Z","end":"2024-11-05T18:12:13.699156Z","steps":["trace[1772748615] 'agreement among raft nodes before linearized reading'  (duration: 7.487277083s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699169Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:06.211838Z","time spent":"7.487327421s","remote":"127.0.0.1:52456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.037776693s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699221Z","caller":"traceutil/trace.go:171","msg":"trace[763418090] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"2.037787826s","start":"2024-11-05T18:12:11.661430Z","end":"2024-11-05T18:12:13.699218Z","steps":["trace[763418090] 'agreement among raft nodes before linearized reading'  (duration: 2.037776524s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:11.661414Z","time spent":"2.03781384s","remote":"127.0.0.1:52228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.734339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-05T18:12:13.734385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-11-05T18:12:13.734444Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-11-05T18:12:13.734706Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734723Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734820Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734866Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.735810Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735871Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735879Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-213000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 18:14:56 up 2 min,  0 users,  load average: 0.10, 0.10, 0.04
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a54d96a8e9e4] <==
	I1105 18:14:25.104544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1105 18:14:25.791429       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	add table inet kindnet-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I1105 18:14:35.800964       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:35.801150       1 main.go:301] handling current node
	I1105 18:14:35.801980       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:35.802041       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:35.802606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.6 Flags: [] Table: 0 Realm: 0} 
	I1105 18:14:35.802866       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:35.802935       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:14:35.804797       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0 Realm: 0} 
	I1105 18:14:45.792345       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:45.792414       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:45.792632       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:45.792668       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:14:45.792764       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:45.792808       1 main.go:301] handling current node
	I1105 18:14:55.801709       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:55.801907       1 main.go:301] handling current node
	I1105 18:14:55.801962       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:55.801980       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:55.802165       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:55.802236       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f9a05b099e4e] <==
	I1105 18:11:41.574590       1 main.go:301] handling current node
	I1105 18:11:41.574599       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:41.574604       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:41.574749       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:41.574789       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567175       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:11:51.567282       1 main.go:301] handling current node
	I1105 18:11:51.567311       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:51.567325       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:51.567514       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:51.567574       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567879       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:11:51.567959       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:01.566316       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:01.566340       1 main.go:301] handling current node
	I1105 18:12:01.566353       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:01.566358       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:01.566565       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:01.566573       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:11.571151       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:11.571336       1 main.go:301] handling current node
	I1105 18:12:11.571478       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:11.571602       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:11.572596       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:11.572626       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [16b5e8baed21] <==
	I1105 18:12:47.610850       1 options.go:228] external host was not specified, using 192.169.0.5
	I1105 18:12:47.613755       1 server.go:142] Version: v1.31.2
	I1105 18:12:47.614011       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.895871       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:12:48.898884       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:12:48.901520       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:12:48.901573       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:12:48.902234       1 instance.go:232] Using reconciler: lease
	W1105 18:13:08.892813       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1105 18:13:08.896286       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1105 18:13:08.903685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W1105 18:13:08.903693       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [8e0c0916fca7] <==
	I1105 18:13:32.048504       1 establishing_controller.go:81] Starting EstablishingController
	I1105 18:13:32.048599       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1105 18:13:32.048646       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1105 18:13:32.048673       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1105 18:13:32.111932       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:13:32.112352       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:13:32.112415       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:13:32.112712       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:13:32.112790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:13:32.115714       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:13:32.115760       1 policy_source.go:224] refreshing policies
	I1105 18:13:32.115832       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:13:32.118673       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:13:32.126538       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:13:32.129328       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:13:32.136801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:13:32.137650       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:13:32.137679       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:13:32.137683       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:13:32.137688       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:13:32.144136       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1105 18:13:32.162460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1105 18:13:33.018201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:13:33.274965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:14:23.399590       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [43950f04c89a] <==
	I1105 18:14:15.564177       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5ldvg"
	I1105 18:14:15.564353       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-213000-m03"
	I1105 18:14:15.565183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.411µs"
	I1105 18:14:15.590695       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-213000-m03"
	I1105 18:14:15.590731       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-trfhn"
	I1105 18:14:15.610087       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-trfhn"
	I1105 18:14:15.610123       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-213000-m03"
	E1105 18:14:15.613786       1 gc_controller.go:255] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4589347d-3131-41ad-822d-d41f3e03a634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-11-05T18:14:15Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-213000-m03\": pods \"kube-vip-ha-213000-m03\" not found" logger="UnhandledError"
	I1105 18:14:15.615307       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-213000-m03"
	I1105 18:14:15.635144       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-213000-m03"
	I1105 18:14:20.621696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:14:23.416708       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2rcm6\": the object has been modified; please apply your changes to the latest version and try again"
	I1105 18:14:23.416951       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"eea44333-75c8-4ade-8223-0ee24b6f9ab0", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2rcm6": the object has been modified; please apply your changes to the latest version and try again
	I1105 18:14:23.435993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.128077ms"
	I1105 18:14:23.436289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.743µs"
	I1105 18:14:23.503484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.726592ms"
	I1105 18:14:23.503948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.593µs"
	I1105 18:14:23.564006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.467814ms"
	I1105 18:14:23.564310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.074µs"
	I1105 18:14:25.752475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.649291ms"
	I1105 18:14:25.752678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.633µs"
	I1105 18:14:25.765769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.252µs"
	I1105 18:14:25.785523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.091µs"
	I1105 18:14:25.792738       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2rcm6\": the object has been modified; please apply your changes to the latest version and try again"
	I1105 18:14:25.793122       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"eea44333-75c8-4ade-8223-0ee24b6f9ab0", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2rcm6": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [ea0b432d9442] <==
	I1105 18:12:48.246520       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:12:48.777745       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:12:48.777814       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.783136       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:12:48.783574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:12:48.783729       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:12:48.783931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1105 18:13:09.910735       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [4aec0d02658e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:10:30.967416       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:10:30.985864       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:10:30.985986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:10:31.019992       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:10:31.020085       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:10:31.020128       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:10:31.022301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:10:31.022843       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:10:31.022888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:10:31.026969       1 config.go:199] "Starting service config controller"
	I1105 18:10:31.027078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:10:31.027666       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:10:31.027692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:10:31.028138       1 config.go:328] "Starting node config controller"
	I1105 18:10:31.028170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:10:31.130453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:10:31.130459       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:10:31.130467       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [85e7cccdf483] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:14:24.812805       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:14:24.832536       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:14:24.832803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:14:24.864245       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:14:24.864284       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:14:24.864314       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:14:24.866476       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:14:24.868976       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:14:24.869009       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:14:24.872199       1 config.go:199] "Starting service config controller"
	I1105 18:14:24.872427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:14:24.872629       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:14:24.872656       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:14:24.874721       1 config.go:328] "Starting node config controller"
	I1105 18:14:24.874748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:14:24.974138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:14:24.974427       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:14:24.975147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad7975173845] <==
	W1105 18:13:17.072213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.072242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.177384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.177607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.472456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.472508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.646303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.646354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.851021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.851072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:18.674193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:18.674222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.133550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.133602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.167612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.167767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.410336       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.410541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.515934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.516006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.540843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.540926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.825617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.825717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I1105 18:13:32.157389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1607d6ea7a3] <==
	W1105 18:10:03.671887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 18:10:03.671970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 18:10:03.672285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:10:03.672503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.672829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672954       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 18:10:03.673005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:10:03.673161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 18:10:03.673298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.673427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.703301       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 18:10:03.703348       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1105 18:10:27.397168       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:11:49.191240       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	E1105 18:11:49.193101       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	I1105 18:11:49.193402       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-28tbv" node="ha-213000-m04"
	I1105 18:12:13.753881       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1105 18:12:13.756404       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1105 18:12:13.756765       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.440521    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.541552    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.641846    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.742792    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.844458    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.945965    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: E1105 18:14:23.047096    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.299353    1575 apiserver.go:52] "Watching apiserver"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.401536    1575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.426959    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-xtables-lock\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427025    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-cni-cfg\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427041    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-lib-modules\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427052    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e7f00930-b382-473c-be59-04504c6e23ff-tmp\") pod \"storage-provisioner\" (UID: \"e7f00930-b382-473c-be59-04504c6e23ff\") " pod="kube-system/storage-provisioner"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427090    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-xtables-lock\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427103    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-lib-modules\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.446313    1575 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 05 18:14:24 ha-213000 kubelet[1575]: I1105 18:14:24.613521    1575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b"
	Nov 05 18:14:40 ha-213000 kubelet[1575]: E1105 18:14:40.279613    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971252    1575 scope.go:117] "RemoveContainer" containerID="6668904ee766d56b8d55ddf5af906befaf694e0933fdf7c8fdb3b42a676d0fb3"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971818    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: E1105 18:14:54.971979    1575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e7f00930-b382-473c-be59-04504c6e23ff)\"" pod="kube-system/storage-provisioner" podUID="e7f00930-b382-473c-be59-04504c6e23ff"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (156.33s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:415: expected profile "ha-213000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-213000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-213000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACo
unt\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-213000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"
KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device
-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimization
s\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (3.081224293s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000 -v=7                                                                                                       | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-213000 -v=7                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:08 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST |                     |
	| node    | ha-213000 node delete m03 -v=7                                                                                               | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-213000 stop -v=7                                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:12 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true                                                                                                     | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:12 PST |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:12:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:12:21.490688   20650 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.490996   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491002   20650 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.491006   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491183   20650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.492670   20650 out.go:352] Setting JSON to false
	I1105 10:12:21.523908   20650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7910,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:12:21.523997   20650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:12:21.546247   20650 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:12:21.588131   20650 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:12:21.588174   20650 notify.go:220] Checking for updates...
	I1105 10:12:21.632868   20650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:21.654057   20650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:12:21.674788   20650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:12:21.696036   20650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:12:21.717022   20650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:12:21.738560   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.739289   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.739362   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.752070   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59007
	I1105 10:12:21.752427   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.752834   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.752843   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.753115   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.753236   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.753425   20650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:12:21.753684   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.753710   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.764480   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59009
	I1105 10:12:21.764817   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.765142   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.765158   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.765399   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.765513   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.796815   20650 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:12:21.838784   20650 start.go:297] selected driver: hyperkit
	I1105 10:12:21.838816   20650 start.go:901] validating driver "hyperkit" against &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.839082   20650 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:12:21.839288   20650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.839546   20650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:12:21.851704   20650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:12:21.858679   20650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.858708   20650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:12:21.864360   20650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:12:21.864394   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:21.864431   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:21.864510   20650 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.864624   20650 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.886086   20650 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:12:21.927848   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:21.927921   20650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:12:21.927965   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:21.928204   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:21.928223   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:21.928393   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:21.929303   20650 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:21.929483   20650 start.go:364] duration metric: took 156.606µs to acquireMachinesLock for "ha-213000"
	I1105 10:12:21.929515   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:21.929530   20650 fix.go:54] fixHost starting: 
	I1105 10:12:21.929991   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.930022   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.941843   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59011
	I1105 10:12:21.942146   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.942523   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.942539   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.942770   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.942869   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.942962   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.943046   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.943124   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.944238   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.944273   20650 fix.go:112] recreateIfNeeded on ha-213000: state=Stopped err=<nil>
	I1105 10:12:21.944288   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	W1105 10:12:21.944375   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:21.965704   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000" ...
	I1105 10:12:21.986830   20650 main.go:141] libmachine: (ha-213000) Calling .Start
	I1105 10:12:21.986975   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.987000   20650 main.go:141] libmachine: (ha-213000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:12:21.988429   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.988437   20650 main.go:141] libmachine: (ha-213000) DBG | pid 20508 is in state "Stopped"
	I1105 10:12:21.988449   20650 main.go:141] libmachine: (ha-213000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid...
	I1105 10:12:21.988605   20650 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:12:22.098530   20650 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:12:22.098573   20650 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:22.098772   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098813   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098872   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:22.098916   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:22.098942   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:22.100556   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Pid is 20664
	I1105 10:12:22.101143   20650 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:12:22.101159   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:22.101260   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:12:22.103059   20650 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:12:22.103211   20650 main.go:141] libmachine: (ha-213000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:22.103230   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:22.103244   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:22.103282   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:22.103300   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6d37}
	I1105 10:12:22.103320   20650 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:12:22.103326   20650 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:12:22.103333   20650 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:12:22.104301   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:22.104508   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:22.104940   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:22.104951   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:22.105084   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:22.105206   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:22.105334   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105499   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105662   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:22.106057   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:22.106277   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:22.106287   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:22.111841   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:22.167275   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:22.168436   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.168488   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.168505   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.168538   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.563375   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:22.563390   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:22.678087   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.678107   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.678118   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.678127   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.678997   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:22.679010   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:28.419344   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:28.419383   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:28.419395   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:28.443700   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:12:33.165174   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:12:33.165187   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165353   20650 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:12:33.165363   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165462   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.165555   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.165665   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165766   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.166032   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.166168   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.166176   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:12:33.233946   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:12:33.233965   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.234107   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.234213   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234303   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234419   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.234574   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.234722   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.234733   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:12:33.296276   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:12:33.296296   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:12:33.296314   20650 buildroot.go:174] setting up certificates
	I1105 10:12:33.296331   20650 provision.go:84] configureAuth start
	I1105 10:12:33.296340   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.296489   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:33.296589   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.296674   20650 provision.go:143] copyHostCerts
	I1105 10:12:33.296705   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296779   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:12:33.296787   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296976   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:12:33.297202   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297251   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:12:33.297256   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297953   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:12:33.298150   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298199   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:12:33.298205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298290   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:12:33.298468   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:12:33.417814   20650 provision.go:177] copyRemoteCerts
	I1105 10:12:33.417886   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:12:33.417904   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.418044   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.418142   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.418231   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.418333   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:33.452233   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:12:33.452305   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 10:12:33.471837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:12:33.471904   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:12:33.491510   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:12:33.491572   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:12:33.511221   20650 provision.go:87] duration metric: took 214.877215ms to configureAuth
	I1105 10:12:33.511235   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:12:33.511399   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:33.511412   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:33.511554   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.511653   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.511767   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511859   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511944   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.512074   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.512201   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.512209   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:12:33.567448   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:12:33.567460   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:12:33.567540   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:12:33.567552   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.567685   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.567782   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567875   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567957   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.568105   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.568243   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.568289   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:12:33.633746   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:12:33.633770   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.633912   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.634017   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634113   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634221   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.634373   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.634523   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.634538   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:12:35.361033   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:12:35.361047   20650 machine.go:96] duration metric: took 13.256219662s to provisionDockerMachine
	I1105 10:12:35.361058   20650 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:12:35.361081   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:12:35.361095   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.361306   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:12:35.361323   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.361415   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.361506   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.361580   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.361669   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.396970   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:12:35.400946   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:12:35.400961   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:12:35.401074   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:12:35.401496   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:12:35.401503   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:12:35.401766   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:12:35.411536   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:35.443784   20650 start.go:296] duration metric: took 82.704716ms for postStartSetup
	I1105 10:12:35.443806   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.444003   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:12:35.444016   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.444100   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.444180   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.444258   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.444349   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.477407   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:12:35.477482   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:12:35.509435   20650 fix.go:56] duration metric: took 13.580030444s for fixHost
	I1105 10:12:35.509456   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.509592   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.509688   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509776   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.510031   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:35.510178   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:35.510185   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:12:35.565839   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830355.864292832
	
	I1105 10:12:35.565852   20650 fix.go:216] guest clock: 1730830355.864292832
	I1105 10:12:35.565857   20650 fix.go:229] Guest: 2024-11-05 10:12:35.864292832 -0800 PST Remote: 2024-11-05 10:12:35.509447 -0800 PST m=+14.061466364 (delta=354.845832ms)
	I1105 10:12:35.565875   20650 fix.go:200] guest clock delta is within tolerance: 354.845832ms
	I1105 10:12:35.565882   20650 start.go:83] releasing machines lock for "ha-213000", held for 13.636511126s
	I1105 10:12:35.565900   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566049   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:35.566151   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566446   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566554   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566709   20650 ssh_runner.go:195] Run: cat /version.json
	I1105 10:12:35.566721   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.566806   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.566888   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.566979   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567064   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.567357   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:12:35.567386   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.567477   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.567559   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.567637   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567715   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.649786   20650 ssh_runner.go:195] Run: systemctl --version
	I1105 10:12:35.655155   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:12:35.659391   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:12:35.659449   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:12:35.672884   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:12:35.672896   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.672997   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:35.691142   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:12:35.700361   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:12:35.709604   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:12:35.709664   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:12:35.718677   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.727574   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:12:35.736665   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.745463   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:12:35.754435   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:12:35.763449   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:12:35.772263   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:12:35.781386   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:12:35.789651   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:12:35.789704   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:12:35.798805   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:12:35.807011   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:35.912193   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:12:35.927985   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.928078   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:12:35.940041   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.954880   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:12:35.969797   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.981073   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:35.992124   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:12:36.016061   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:36.027432   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:36.042843   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:12:36.045910   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:12:36.054070   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:12:36.067653   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:12:36.164803   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:12:36.262358   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:12:36.262434   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:12:36.276549   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:36.372055   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:12:38.718640   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346585524s)
	I1105 10:12:38.718725   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:12:38.729009   20650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:12:38.741745   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:38.752392   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:12:38.846699   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:12:38.961329   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.072900   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:12:39.086802   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:39.097743   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.205555   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:12:39.272726   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:12:39.273861   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:12:39.278279   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:12:39.278336   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:12:39.281386   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:12:39.307263   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:12:39.307378   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.325423   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.384603   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:12:39.384677   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:39.385383   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:12:39.389204   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.398876   20650 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:12:39.398970   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:39.399044   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.411346   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.411370   20650 docker.go:619] Images already preloaded, skipping extraction
	I1105 10:12:39.411458   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.424491   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.424511   20650 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:12:39.424518   20650 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:12:39.424600   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:12:39.424690   20650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:12:39.458782   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:39.458796   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:39.458807   20650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:12:39.458824   20650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:12:39.458910   20650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
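The config above is one multi-document YAML stream (written later in the log as `/var/tmp/minikube/kubeadm.yaml.new`): InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A quick, illustrative sketch of pulling the document kinds out of such a stream; the heredoc is a trimmed stand-in for the real file, and the path is an example:

```shell
#!/bin/sh
# List the document kinds in a kubeadm-style multi-doc YAML stream.
# The heredoc is a trimmed stand-in for the kubeadm.yaml shown in the log.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Each component gets its own document; kubeadm reads all four at init time.
kinds=$(awk '/^kind:/ {print $2}' /tmp/kubeadm-demo.yaml | tr '\n' ' ')
echo "$kinds"
rm -f /tmp/kubeadm-demo.yaml
```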
	
	I1105 10:12:39.458922   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:12:39.459000   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:12:39.472063   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:12:39.472130   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:12:39.472197   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:12:39.480694   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:12:39.480761   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:12:39.488010   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:12:39.501448   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:12:39.514699   20650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:12:39.528604   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:12:39.542711   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:12:39.545676   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
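The `grep -v … ; echo …` pipeline in the two `/etc/hosts` commands above is an idempotent host-entry update: drop any existing line for the hostname, append the fresh mapping, and copy the temp file back, so repeated starts never accumulate duplicates. A standalone sketch of the same pattern, run against a scratch file instead of the real `/etc/hosts` (path and hostname are illustrative):

```shell
#!/bin/sh
# Sketch of the idempotent hosts-entry update seen in the log above.
# Operates on a scratch file so no sudo is needed; names are examples.
HOSTS=/tmp/hosts.demo.$$
printf '127.0.0.1\tlocalhost\n192.169.0.254\tcontrol-plane.minikube.internal\n' > "$HOSTS"

update_hosts_entry() {
    ip=$1; host=$2
    # Remove any line already ending in "<TAB><host>", then append the mapping.
    { grep -v "	${host}\$" "$HOSTS"; printf '%s\t%s\n' "$ip" "$host"; } > "$HOSTS.new"
    mv "$HOSTS.new" "$HOSTS"
}

update_hosts_entry 192.169.0.254 control-plane.minikube.internal
update_hosts_entry 192.169.0.254 control-plane.minikube.internal  # rerun is a no-op

count=$(grep -c 'control-plane.minikube.internal' "$HOSTS")
echo "entries: $count"
rm -f "$HOSTS"
```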
	I1105 10:12:39.555042   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.651842   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:12:39.666232   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:12:39.666245   20650 certs.go:194] generating shared ca certs ...
	I1105 10:12:39.666254   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.666455   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:12:39.666548   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:12:39.666558   20650 certs.go:256] generating profile certs ...
	I1105 10:12:39.666641   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:12:39.666660   20650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b
	I1105 10:12:39.666677   20650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:12:39.768951   20650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b ...
	I1105 10:12:39.768965   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b: {Name:mk94691c5901a2a72a9bc83f127c5282216d457c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.769986   20650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b ...
	I1105 10:12:39.770003   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b: {Name:mk80fa552a8414775a1a2e3534b5be60adeae6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.770739   20650 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:12:39.770972   20650 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:12:39.771252   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:12:39.771262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:12:39.771288   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:12:39.771314   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:12:39.771335   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:12:39.771353   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:12:39.771376   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:12:39.771395   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:12:39.771413   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:12:39.771524   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:12:39.771579   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:12:39.771588   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:12:39.771622   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:12:39.771657   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:12:39.771686   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:12:39.771750   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:39.771787   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:12:39.771817   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:39.771836   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:12:39.772313   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:12:39.799103   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:12:39.823713   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:12:39.848122   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:12:39.876362   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:12:39.898968   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:12:39.924496   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:12:39.975578   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:12:40.017567   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:12:40.062386   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:12:40.134510   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:12:40.170763   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:12:40.196135   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:12:40.201525   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:12:40.214259   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222331   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222400   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.235959   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:12:40.247519   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:12:40.256007   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259529   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259576   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.263770   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:12:40.272126   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:12:40.280328   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283753   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283804   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.288095   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
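The `openssl x509 -hash` plus `ln -fs …/<hash>.0` pairs above implement OpenSSL's hashed-directory CA lookup: verifiers find a CA in a cert directory through a symlink named after the certificate's subject hash. A self-contained sketch of that convention, using a throwaway self-signed cert in place of minikube's CA (paths and CN are examples):

```shell
#!/bin/sh
# Sketch of OpenSSL's hashed-directory CA lookup behind the commands above.
# A throwaway self-signed cert stands in for minikube's CA; paths are examples.
set -e
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$DIR/ca.key" \
    -out "$DIR/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null

# Tools locate a CA in -CApath via a symlink named <subject-hash>.0, which is
# exactly what the "openssl x509 -hash" + "ln -fs" pair in the log sets up.
hash=$(openssl x509 -hash -noout -in "$DIR/ca.pem")
ln -fs "$DIR/ca.pem" "$DIR/$hash.0"

# Verification succeeds purely through the hashed-symlink lookup.
result=$(openssl verify -CApath "$DIR" "$DIR/ca.pem")
echo "$result"
rm -rf "$DIR"
```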
	I1105 10:12:40.296378   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:12:40.300009   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:12:40.304421   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:12:40.309440   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:12:40.314156   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:12:40.318720   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:12:40.323054   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
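The six `-checkend 86400` probes above ask one question per cert: will it expire within the next 86400 seconds (24 h)? Exit 0 means still valid then; non-zero triggers regeneration. A minimal sketch of that check against a throwaway cert (CN and paths are illustrative):

```shell
#!/bin/sh
# Sketch of the expiry probe minikube runs on each cluster cert above.
# Uses a throwaway self-signed cert valid for 2 days; names are examples.
set -e
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -out "$CERT" -days 2 -subj "/CN=checkend-demo" 2>/dev/null

# Exit 0: cert is still valid 86400 s from now; non-zero: expiring/expired.
if openssl x509 -noout -in "$CERT" -checkend 86400 >/dev/null; then
    verdict=valid
else
    verdict=expiring
fi
echo "$verdict"
rm -f "$CERT"
```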
	I1105 10:12:40.327653   20650 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:40.327789   20650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:12:40.338896   20650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:12:40.346426   20650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 10:12:40.346451   20650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 10:12:40.346505   20650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 10:12:40.354659   20650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:12:40.354973   20650 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-213000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355052   20650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19910-17277/kubeconfig needs updating (will repair): [kubeconfig missing "ha-213000" cluster setting kubeconfig missing "ha-213000" context setting]
	I1105 10:12:40.355252   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.355659   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355866   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:12:40.356225   20650 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:12:40.356390   20650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 10:12:40.363779   20650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1105 10:12:40.363792   20650 kubeadm.go:597] duration metric: took 17.337248ms to restartPrimaryControlPlane
	I1105 10:12:40.363798   20650 kubeadm.go:394] duration metric: took 36.151791ms to StartCluster
	I1105 10:12:40.363807   20650 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.363904   20650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.364287   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.364493   20650 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:12:40.364506   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:12:40.364518   20650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:12:40.364641   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.406496   20650 out.go:177] * Enabled addons: 
	I1105 10:12:40.427423   20650 addons.go:510] duration metric: took 62.890869ms for enable addons: enabled=[]
	I1105 10:12:40.427463   20650 start.go:246] waiting for cluster config update ...
	I1105 10:12:40.427476   20650 start.go:255] writing updated cluster config ...
	I1105 10:12:40.449627   20650 out.go:201] 
	I1105 10:12:40.470603   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.470682   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.492690   20650 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:12:40.534643   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:40.534678   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:40.534889   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:40.534908   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:40.535035   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.535960   20650 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:40.536081   20650 start.go:364] duration metric: took 95.311µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:12:40.536107   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:40.536116   20650 fix.go:54] fixHost starting: m02
	I1105 10:12:40.536544   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:40.536591   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:40.548252   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59033
	I1105 10:12:40.548561   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:40.548918   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:40.548932   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:40.549159   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:40.549276   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.549386   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:40.549477   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.549545   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:40.550641   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.550670   20650 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:12:40.550679   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:12:40.550782   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:40.571623   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:12:40.592623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:12:40.592918   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.592966   20650 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:12:40.594491   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.594501   20650 main.go:141] libmachine: (ha-213000-m02) DBG | pid 20524 is in state "Stopped"
	I1105 10:12:40.594516   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:12:40.594967   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:12:40.619713   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:12:40.619737   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:40.619893   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619922   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619952   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:40.619999   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:40.620018   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:40.621465   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Pid is 20673
	I1105 10:12:40.621946   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:12:40.621963   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.622060   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:12:40.623801   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:12:40.623940   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:40.623961   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:12:40.623986   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:40.624000   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:40.624015   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:40.624016   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:12:40.624023   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:12:40.624043   20650 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:12:40.624734   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:12:40.624956   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.625445   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:40.625455   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.625562   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:12:40.625653   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:12:40.625748   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.625874   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.626045   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:12:40.626222   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:40.626362   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:12:40.626369   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:40.631955   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:40.641267   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:40.642527   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:40.642544   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:40.642551   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:40.642561   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.034838   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:41.034853   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:41.149888   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:41.149903   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:41.149911   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:41.149917   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.150684   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:41.150696   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:46.914486   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:46.914552   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:46.914564   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:46.937828   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:15.697814   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:15.697829   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.697958   20650 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:13:15.697969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.698068   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.698166   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.698262   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698349   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698429   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.698590   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.698739   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.698748   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:13:15.770158   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:13:15.770174   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.770319   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.770428   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770526   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.770785   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.770922   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.770933   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:15.838124   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:15.838139   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:15.838159   20650 buildroot.go:174] setting up certificates
	I1105 10:13:15.838166   20650 provision.go:84] configureAuth start
	I1105 10:13:15.838173   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.838309   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:15.838391   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.838477   20650 provision.go:143] copyHostCerts
	I1105 10:13:15.838504   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838551   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:15.838557   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838677   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:15.838892   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.838922   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:15.838926   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.839007   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:15.839169   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839200   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:15.839205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839275   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:15.839440   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:13:15.878682   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:15.878747   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:15.878761   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.878912   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.879015   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.879122   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.879221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:15.916727   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:15.916795   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:15.936280   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:15.936341   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:13:15.956339   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:15.956417   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:15.976131   20650 provision.go:87] duration metric: took 137.957663ms to configureAuth
	I1105 10:13:15.976145   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:15.976324   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:15.976339   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:15.976475   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.976573   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.976661   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976740   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976813   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.976940   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.977065   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.977072   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:16.038725   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:16.038739   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:16.038839   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:16.038851   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.038998   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.039098   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039192   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039283   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.039436   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.039572   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.039618   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:16.112446   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:16.112468   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.112623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.112715   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112811   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112892   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.113049   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.113223   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.113236   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:17.783702   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:17.783717   20650 machine.go:96] duration metric: took 37.158599705s to provisionDockerMachine
	I1105 10:13:17.783726   20650 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:13:17.783733   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:17.783744   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.783939   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:17.783953   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.784616   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.785152   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.785404   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.785500   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.822226   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:17.825293   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:17.825304   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:17.825392   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:17.825532   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:17.825538   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:17.825699   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:17.832977   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:17.852599   20650 start.go:296] duration metric: took 68.865935ms for postStartSetup
	I1105 10:13:17.852645   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.852828   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:17.852840   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.852946   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.853034   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.853111   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.853195   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.891315   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:17.891389   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:17.944504   20650 fix.go:56] duration metric: took 37.408724528s for fixHost
	I1105 10:13:17.944528   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.944681   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.944779   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944880   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944973   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.945125   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:17.945257   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:17.945264   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:18.009463   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830397.963598694
	
	I1105 10:13:18.009476   20650 fix.go:216] guest clock: 1730830397.963598694
	I1105 10:13:18.009482   20650 fix.go:229] Guest: 2024-11-05 10:13:17.963598694 -0800 PST Remote: 2024-11-05 10:13:17.944519 -0800 PST m=+56.496923048 (delta=19.079694ms)
	I1105 10:13:18.009492   20650 fix.go:200] guest clock delta is within tolerance: 19.079694ms
	I1105 10:13:18.009495   20650 start.go:83] releasing machines lock for "ha-213000-m02", held for 37.47374268s
	I1105 10:13:18.009512   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.009649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:18.032281   20650 out.go:177] * Found network options:
	I1105 10:13:18.052088   20650 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:13:18.073014   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.073053   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.073969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074186   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074319   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:13:18.074355   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:13:18.074369   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.074467   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:18.074483   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:18.074488   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074646   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074801   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.074850   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074993   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:18.075008   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.075127   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:13:18.108947   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:18.109027   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:18.155414   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:18.155436   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.155551   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.172114   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:18.180388   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:18.188528   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.188587   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:18.196712   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.204897   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:18.213206   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.221579   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:18.230149   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:18.238366   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:18.246617   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:18.255037   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:18.262631   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:18.262690   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:18.270933   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:18.278375   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.375712   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:18.394397   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.394485   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:18.410636   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.423391   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:18.441876   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.452612   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.462897   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:18.485662   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.495897   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.511009   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:18.513991   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:18.521476   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:18.534868   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:18.632191   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:18.734981   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.735009   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:18.749050   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.853897   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:13:21.134871   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28097554s)
	I1105 10:13:21.134948   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:13:21.146360   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.157264   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:13:21.267741   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:13:21.382285   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.483458   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:13:21.496077   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.506512   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.618640   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:13:21.685448   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:13:21.685559   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:13:21.689888   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:13:21.689958   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:13:21.693059   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:13:21.721401   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:13:21.721489   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.737796   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.775162   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:13:21.818311   20650 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:13:21.839158   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:21.839596   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:13:21.844257   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:21.854347   20650 mustload.go:65] Loading cluster: ha-213000
	I1105 10:13:21.854526   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:21.854763   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.854810   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.866117   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59055
	I1105 10:13:21.866449   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.866785   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.866795   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.867005   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.867094   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:13:21.867180   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:21.867248   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:13:21.868436   20650 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:13:21.868696   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.868721   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.879648   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59057
	I1105 10:13:21.879951   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.880304   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.880326   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.880564   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.880680   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:13:21.880800   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:13:21.880806   20650 certs.go:194] generating shared ca certs ...
	I1105 10:13:21.880817   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:13:21.880976   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:13:21.881033   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:13:21.881041   20650 certs.go:256] generating profile certs ...
	I1105 10:13:21.881133   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:13:21.881677   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:13:21.881747   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:13:21.881756   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:13:21.881777   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:13:21.881800   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:13:21.881819   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:13:21.881837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:13:21.881855   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:13:21.881874   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:13:21.881891   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:13:21.881971   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:13:21.882008   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:13:21.882016   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:13:21.882051   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:13:21.882090   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:13:21.882131   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:13:21.882199   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:21.882240   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:21.882262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:13:21.882285   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:13:21.882314   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:13:21.882395   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:13:21.882480   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:13:21.882563   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:13:21.882639   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:13:21.908416   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:13:21.911559   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:13:21.921605   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:13:21.924753   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:13:21.933495   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:13:21.936611   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:13:21.945312   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:13:21.948273   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:13:21.957659   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:13:21.960739   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:13:21.969191   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:13:21.972356   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:13:21.981306   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:13:22.001469   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:13:22.021181   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:13:22.040587   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:13:22.060078   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:13:22.079285   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:13:22.098538   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:13:22.118296   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:13:22.137769   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:13:22.156929   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:13:22.176353   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:13:22.195510   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:13:22.209194   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:13:22.222827   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:13:22.236546   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:13:22.250070   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:13:22.263444   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:13:22.276970   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:13:22.290700   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:13:22.294935   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:13:22.304164   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307578   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307635   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.311940   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:13:22.320904   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:13:22.329872   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333271   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333318   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.337523   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:13:22.346681   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:13:22.355874   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359764   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359823   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.364168   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:13:22.373288   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:13:22.376713   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:13:22.381681   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:13:22.386495   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:13:22.390985   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:13:22.395318   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:13:22.399578   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:13:22.403998   20650 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:13:22.404052   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:13:22.404067   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:13:22.404115   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:13:22.417096   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:13:22.417139   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:13:22.417203   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:13:22.426058   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:13:22.426117   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:13:22.434774   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:13:22.448444   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:13:22.461910   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:13:22.475772   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:13:22.478602   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:22.487944   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.594180   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.608389   20650 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:13:22.608597   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:22.629533   20650 out.go:177] * Verifying Kubernetes components...
	I1105 10:13:22.671507   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.795219   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.807186   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:13:22.807391   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:13:22.807429   20650 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:13:22.807616   20650 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:22.807698   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:22.807704   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:22.807711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:22.807714   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.750948   20650 round_trippers.go:574] Response Status: 200 OK in 8943 milliseconds
	I1105 10:13:31.752572   20650 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:13:31.752585   20650 node_ready.go:38] duration metric: took 8.945035646s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:31.752614   20650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:31.752661   20650 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:13:31.752671   20650 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:13:31.752720   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:31.752727   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.752733   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.752738   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.802951   20650 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1105 10:13:31.809829   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.809889   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:13:31.809894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.809900   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.809904   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.814415   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:31.815355   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.815363   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.815369   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.815373   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.822380   20650 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:13:31.822662   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.822672   20650 pod_ready.go:82] duration metric: took 12.826683ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822679   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822728   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:13:31.822733   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.822739   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.822744   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.826328   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.826822   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.826831   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.826837   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.826841   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.829860   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.830181   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.830191   20650 pod_ready.go:82] duration metric: took 7.507226ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830198   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830235   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:13:31.830240   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.830245   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.830252   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.832219   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:31.832697   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.832706   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.832711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.832715   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.835276   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.835692   20650 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.835701   20650 pod_ready.go:82] duration metric: took 5.498306ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835709   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835747   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:13:31.835752   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.835758   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.835762   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.841537   20650 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:13:31.841973   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:31.841981   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.841986   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.841990   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844531   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.844869   20650 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.844879   20650 pod_ready.go:82] duration metric: took 9.164525ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844885   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844921   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:13:31.844926   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.844931   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844936   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.848600   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.952821   20650 request.go:632] Waited for 103.696334ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952860   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952865   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.952873   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.952877   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.955043   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:31.955226   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955236   20650 pod_ready.go:82] duration metric: took 110.346207ms for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:31.955242   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
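The "(skipping!)" lines above reflect a deliberate branch in the wait loop: before polling a pod's Ready condition, the loop looks up the node that hosts the pod, and a 404 for that node (here, the removed ha-213000-m03) marks the pod as skipped rather than failing the whole wait. A sketch of that decision with stubbed lookups — function and variable names here are illustrative, not minikube's actual `pod_ready.go`:

```go
package main

import (
	"errors"
	"fmt"
)

// errNodeNotFound stands in for the apiserver's 404 on GET /api/v1/nodes/<name>.
var errNodeNotFound = errors.New("node not found")

// getNode is a stub for the round_trippers calls in the log: it "succeeds"
// only for nodes present in the cluster's current node set.
func getNode(known map[string]bool, name string) error {
	if !known[name] {
		return errNodeNotFound
	}
	return nil
}

// waitPodCondition mirrors the skip-vs-wait decision: a pod hosted on a
// deleted node is skipped (not treated as a failure of the overall wait),
// matching the "(skipping!)" log lines.
func waitPodCondition(known map[string]bool, node string) (skipped bool, err error) {
	if err := getNode(known, node); err != nil {
		if errors.Is(err, errNodeNotFound) {
			return true, nil // node gone: skip this pod, keep waiting on the rest
		}
		return false, err // any other error is a real failure
	}
	// ...here the real loop would poll the pod's Ready condition...
	return false, nil
}

func main() {
	nodes := map[string]bool{"ha-213000": true, "ha-213000-m02": true}
	skipped, _ := waitPodCondition(nodes, "ha-213000-m03")
	fmt.Println("etcd-ha-213000-m03 skipped:", skipped)
}
```

This is why the run keeps making progress even though m03's etcd, kube-apiserver, kube-controller-manager, and kube-proxy pods all resolve to a node that no longer exists.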
	I1105 10:13:31.955257   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.153855   20650 request.go:632] Waited for 198.56381ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153901   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153906   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.153912   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.153915   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.156326   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.354721   20650 request.go:632] Waited for 197.883079ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354800   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354808   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.354816   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.354821   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.357314   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.357758   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.357771   20650 pod_ready.go:82] duration metric: took 402.50745ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.357779   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.554904   20650 request.go:632] Waited for 197.060501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555009   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555040   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.555059   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.555071   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.562819   20650 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 10:13:32.752788   20650 request.go:632] Waited for 189.599558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752820   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752825   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.752864   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.752870   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.755075   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.755378   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.755387   20650 pod_ready.go:82] duration metric: took 397.605979ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.755394   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.952787   20650 request.go:632] Waited for 197.357502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952836   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952842   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.952853   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.955636   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.153249   20650 request.go:632] Waited for 196.999871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153317   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153323   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.153329   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.153334   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.155712   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:33.155782   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155797   20650 pod_ready.go:82] duration metric: took 400.400564ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:33.155804   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155810   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.353944   20650 request.go:632] Waited for 198.075152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354021   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354033   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.354041   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.354047   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.356715   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.553130   20650 request.go:632] Waited for 196.01942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553198   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553204   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.553237   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.553242   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.555527   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.555890   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.555899   20650 pod_ready.go:82] duration metric: took 400.086552ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.555906   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.752845   20650 request.go:632] Waited for 196.894857ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752909   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752915   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.752921   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.752925   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.754805   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.953311   20650 request.go:632] Waited for 197.807461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953353   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953381   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.953389   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.953392   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.955376   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.955836   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.955846   20650 pod_ready.go:82] duration metric: took 399.938695ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.955855   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.153021   20650 request.go:632] Waited for 197.093812ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153060   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153065   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.153072   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.153075   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.155546   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:34.353423   20650 request.go:632] Waited for 197.340662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353457   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353463   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.353469   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.353472   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.355383   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.355495   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355514   20650 pod_ready.go:82] duration metric: took 399.657027ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.355524   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355532   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.553620   20650 request.go:632] Waited for 198.034445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553677   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553683   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.553689   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.553694   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.555564   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:34.753369   20650 request.go:632] Waited for 197.394131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753424   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753431   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.753436   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.753440   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.755363   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.755426   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755436   20650 pod_ready.go:82] duration metric: took 399.890345ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.755442   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755446   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.953531   20650 request.go:632] Waited for 198.038372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953615   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953624   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.953631   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.953636   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.955951   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.153813   20650 request.go:632] Waited for 196.981939ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153879   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.153903   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.153910   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.156466   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.157099   20650 pod_ready.go:93] pod "kube-proxy-m45pk" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.157109   20650 pod_ready.go:82] duration metric: took 401.65588ms for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.157117   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.354248   20650 request.go:632] Waited for 197.082179ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354294   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354302   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.354340   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.354347   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.357098   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.552778   20650 request.go:632] Waited for 195.237923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552882   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552910   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.552918   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.552923   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.555242   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.555725   20650 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.555734   20650 pod_ready.go:82] duration metric: took 398.615884ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.555748   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.752802   20650 request.go:632] Waited for 196.982082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752849   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752855   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.752861   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.752865   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.755216   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.953665   20650 request.go:632] Waited for 197.923503ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953733   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953742   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.953751   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.953758   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.955875   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.956268   20650 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.956277   20650 pod_ready.go:82] duration metric: took 400.526917ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.956283   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.153409   20650 request.go:632] Waited for 197.086533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153486   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153496   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.153504   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.153513   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.156474   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.354367   20650 request.go:632] Waited for 197.602225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354401   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354406   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.354421   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.354441   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.356601   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.356994   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.357004   20650 pod_ready.go:82] duration metric: took 400.718541ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.357011   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.554145   20650 request.go:632] Waited for 197.038016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554243   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554252   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.554264   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.554270   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.556774   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.753404   20650 request.go:632] Waited for 196.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753437   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753442   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.753448   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.753452   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.756764   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:36.757112   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.757122   20650 pod_ready.go:82] duration metric: took 400.109512ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.757130   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.953514   20650 request.go:632] Waited for 196.347448ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953546   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953558   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.953565   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.953575   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.955940   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.154619   20650 request.go:632] Waited for 198.194145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154663   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154669   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.154676   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.154695   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.157438   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:37.157524   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157535   20650 pod_ready.go:82] duration metric: took 400.40261ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:37.157542   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157547   20650 pod_ready.go:39] duration metric: took 5.404967892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:37.157569   20650 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:13:37.157646   20650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:13:37.171805   20650 api_server.go:72] duration metric: took 14.563521484s to wait for apiserver process to appear ...
	I1105 10:13:37.171821   20650 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:13:37.171836   20650 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:13:37.176463   20650 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:13:37.176507   20650 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:13:37.176512   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.176518   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.176523   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.177377   20650 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:13:37.177442   20650 api_server.go:141] control plane version: v1.31.2
	I1105 10:13:37.177460   20650 api_server.go:131] duration metric: took 5.62791ms to wait for apiserver health ...
	I1105 10:13:37.177467   20650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:13:37.352914   20650 request.go:632] Waited for 175.404088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352969   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352975   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.352982   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.352986   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.357439   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.362936   20650 system_pods.go:59] 26 kube-system pods found
	I1105 10:13:37.362960   20650 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.362964   20650 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.362968   20650 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.362973   20650 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.362980   20650 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.362984   20650 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.362987   20650 system_pods.go:61] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.362993   20650 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.362999   20650 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.363003   20650 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.363007   20650 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.363013   20650 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.363016   20650 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.363021   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.363024   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.363027   20650 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.363030   20650 system_pods.go:61] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.363036   20650 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 10:13:37.363040   20650 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.363042   20650 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.363046   20650 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.363051   20650 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.363055   20650 system_pods.go:61] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.363059   20650 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.363063   20650 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.363065   20650 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.363070   20650 system_pods.go:74] duration metric: took 185.599377ms to wait for pod list to return data ...
	I1105 10:13:37.363076   20650 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:13:37.554093   20650 request.go:632] Waited for 190.967335ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554130   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554138   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.554152   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.554156   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.557460   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:37.557594   20650 default_sa.go:45] found service account: "default"
	I1105 10:13:37.557604   20650 default_sa.go:55] duration metric: took 194.526347ms for default service account to be created ...
	I1105 10:13:37.557612   20650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:13:37.752842   20650 request.go:632] Waited for 195.185977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752875   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752881   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.752902   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.752907   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.757021   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.762493   20650 system_pods.go:86] 26 kube-system pods found
	I1105 10:13:37.762505   20650 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.762509   20650 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.762512   20650 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.762517   20650 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.762521   20650 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.762525   20650 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.762528   20650 system_pods.go:89] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.762532   20650 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.762535   20650 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.762539   20650 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.762543   20650 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.762548   20650 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.762551   20650 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.762557   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.762561   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.762566   20650 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.762569   20650 system_pods.go:89] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.762572   20650 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:13:37.762575   20650 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.762578   20650 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.762583   20650 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.762590   20650 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.762594   20650 system_pods.go:89] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.762596   20650 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.762600   20650 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.762602   20650 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.762607   20650 system_pods.go:126] duration metric: took 204.991818ms to wait for k8s-apps to be running ...
	I1105 10:13:37.762614   20650 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:13:37.762682   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:13:37.777110   20650 system_svc.go:56] duration metric: took 14.491738ms WaitForService to wait for kubelet
	I1105 10:13:37.777127   20650 kubeadm.go:582] duration metric: took 15.16885159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:13:37.777138   20650 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:13:37.952770   20650 request.go:632] Waited for 175.557407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952816   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952827   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.952839   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.955592   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.956364   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956379   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956390   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956393   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956397   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956399   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956403   20650 node_conditions.go:105] duration metric: took 179.263041ms to run NodePressure ...
	I1105 10:13:37.956411   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:13:37.956426   20650 start.go:255] writing updated cluster config ...
	I1105 10:13:37.978800   20650 out.go:201] 
	I1105 10:13:38.000237   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:38.000353   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.022912   20650 out.go:177] * Starting "ha-213000-m04" worker node in "ha-213000" cluster
	I1105 10:13:38.065816   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:13:38.065838   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:13:38.065942   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:13:38.065952   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:13:38.066024   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.066548   20650 start.go:360] acquireMachinesLock for ha-213000-m04: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:13:38.066601   20650 start.go:364] duration metric: took 39.836µs to acquireMachinesLock for "ha-213000-m04"
	I1105 10:13:38.066614   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:13:38.066619   20650 fix.go:54] fixHost starting: m04
	I1105 10:13:38.066839   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:38.066859   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:38.078183   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59062
	I1105 10:13:38.078511   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:38.078858   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:38.078877   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:38.079111   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:38.079203   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.079308   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:13:38.079392   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.079457   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:13:38.080557   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.080601   20650 fix.go:112] recreateIfNeeded on ha-213000-m04: state=Stopped err=<nil>
	I1105 10:13:38.080610   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	W1105 10:13:38.080695   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:13:38.101909   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m04" ...
	I1105 10:13:38.150121   20650 main.go:141] libmachine: (ha-213000-m04) Calling .Start
	I1105 10:13:38.150270   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.150297   20650 main.go:141] libmachine: (ha-213000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid
	I1105 10:13:38.151495   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.151504   20650 main.go:141] libmachine: (ha-213000-m04) DBG | pid 20571 is in state "Stopped"
	I1105 10:13:38.151536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid...
	I1105 10:13:38.151981   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Using UUID 70721578-92b7-4edc-935c-43ebcacd790c
	I1105 10:13:38.175524   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Generated MAC 1a:a3:f2:a5:2e:39
	I1105 10:13:38.175551   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:13:38.175756   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175805   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175883   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70721578-92b7-4edc-935c-43ebcacd790c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:13:38.175929   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70721578-92b7-4edc-935c-43ebcacd790c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:13:38.175943   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:13:38.177358   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Pid is 20690
	I1105 10:13:38.177760   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Attempt 0
	I1105 10:13:38.177775   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.177790   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:13:38.179817   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Searching for 1a:a3:f2:a5:2e:39 in /var/db/dhcpd_leases ...
	I1105 10:13:38.179881   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:13:38.179891   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:13:38.179930   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:13:38.179944   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:13:38.179961   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:13:38.179966   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found match: 1a:a3:f2:a5:2e:39
	I1105 10:13:38.179974   20650 main.go:141] libmachine: (ha-213000-m04) DBG | IP: 192.169.0.8
	I1105 10:13:38.180001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetConfigRaw
	I1105 10:13:38.180736   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:38.180968   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.181459   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:13:38.181471   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.181605   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:38.181707   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:38.181828   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.181929   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.182026   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:38.182165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:38.182315   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:38.182325   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:13:38.188897   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:13:38.198428   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:13:38.199856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.199886   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.199916   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.199953   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.594841   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:13:38.594856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:13:38.709716   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.709736   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.709743   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.709759   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.710592   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:13:38.710604   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:13:44.475519   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:13:44.475536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:13:44.475546   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:13:44.498793   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:49.237329   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:49.237349   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237488   20650 buildroot.go:166] provisioning hostname "ha-213000-m04"
	I1105 10:13:49.237500   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237590   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.237684   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.237765   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237842   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237935   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.238078   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.238220   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.238229   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m04 && echo "ha-213000-m04" | sudo tee /etc/hostname
	I1105 10:13:49.297417   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m04
	
	I1105 10:13:49.297437   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.297576   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.297673   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297757   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297853   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.297997   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.298162   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.298173   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:49.354308   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:49.354323   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:49.354341   20650 buildroot.go:174] setting up certificates
	I1105 10:13:49.354349   20650 provision.go:84] configureAuth start
	I1105 10:13:49.354357   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.354507   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:49.354606   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.354711   20650 provision.go:143] copyHostCerts
	I1105 10:13:49.354741   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354793   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:49.354799   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354909   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:49.355124   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355155   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:49.355159   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355228   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:49.355419   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355454   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:49.355461   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355528   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:49.355690   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m04 san=[127.0.0.1 192.169.0.8 ha-213000-m04 localhost minikube]
	I1105 10:13:49.396705   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:49.396767   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:49.396780   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.396910   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.397015   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.397117   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.397221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:49.427813   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:49.427885   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:49.447457   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:49.447518   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:49.467286   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:49.467359   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:13:49.487192   20650 provision.go:87] duration metric: took 132.83626ms to configureAuth
	I1105 10:13:49.487209   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:49.487380   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:49.487394   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:49.487531   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.487631   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.487715   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487801   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487890   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.488033   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.488154   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.488162   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:49.537465   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:49.537478   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:49.537561   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:49.537571   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.537704   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.537799   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537884   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537998   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.538165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.538298   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.538345   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:49.598479   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:49.598502   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.598649   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.598747   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598833   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598947   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.599089   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.599234   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.599246   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:51.207763   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:51.207782   20650 machine.go:96] duration metric: took 13.026432223s to provisionDockerMachine
	I1105 10:13:51.207792   20650 start.go:293] postStartSetup for "ha-213000-m04" (driver="hyperkit")
	I1105 10:13:51.207801   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:51.207816   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.208031   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:51.208047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.208140   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.208231   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.208318   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.208438   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.241123   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:51.244240   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:51.244251   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:51.244336   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:51.244477   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:51.244484   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:51.244646   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:51.252753   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:51.271782   20650 start.go:296] duration metric: took 63.980744ms for postStartSetup
	I1105 10:13:51.271803   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.271989   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:51.272001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.272093   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.272178   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.272277   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.272371   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.304392   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:51.304469   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:51.358605   20650 fix.go:56] duration metric: took 13.292102469s for fixHost
	I1105 10:13:51.358630   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.358783   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.358880   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.358963   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.359053   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.359195   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:51.359329   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:51.359336   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:51.407868   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830431.709090009
	
	I1105 10:13:51.407885   20650 fix.go:216] guest clock: 1730830431.709090009
	I1105 10:13:51.407890   20650 fix.go:229] Guest: 2024-11-05 10:13:51.709090009 -0800 PST Remote: 2024-11-05 10:13:51.35862 -0800 PST m=+89.911326584 (delta=350.470009ms)
	I1105 10:13:51.407901   20650 fix.go:200] guest clock delta is within tolerance: 350.470009ms
	I1105 10:13:51.407906   20650 start.go:83] releasing machines lock for "ha-213000-m04", held for 13.34141889s
	I1105 10:13:51.407923   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.408055   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:51.430524   20650 out.go:177] * Found network options:
	I1105 10:13:51.451633   20650 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:13:51.472140   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.472164   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.472179   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472739   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472888   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.473015   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1105 10:13:51.473025   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.473039   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.473047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473124   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:51.473137   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473175   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473286   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473299   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473387   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473400   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473487   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.473517   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473599   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	W1105 10:13:51.501432   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:51.501515   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:51.553972   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:51.553993   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.554083   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.569365   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:51.577607   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:51.586014   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:51.586084   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:51.594293   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.602646   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:51.610969   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.619400   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:51.627741   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:51.635982   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:51.645401   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:51.653565   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:51.660899   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:51.660963   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:51.669419   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:51.677143   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:51.772664   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:51.792178   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.792270   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:51.808083   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.820868   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:51.842221   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.854583   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.865539   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:51.892869   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.904042   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.922494   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:51.928520   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:51.945780   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:51.962437   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:52.060460   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:52.163232   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:52.163260   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:52.178328   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:52.296397   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:14:53.349067   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016016812s)
	I1105 10:14:53.349159   20650 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:14:53.385876   20650 out.go:201] 
	W1105 10:14:53.422606   20650 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:14:53.422674   20650 out.go:270] * 
	W1105 10:14:53.423462   20650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:14:53.533703   20650 out.go:201] 
	
	
	==> Docker <==
	Nov 05 18:14:24 ha-213000 cri-dockerd[1411]: time="2024-11-05T18:14:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.320957280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321014942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321032889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321144470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358583815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358913638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358923588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.359308274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371019459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371180579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371195366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371264075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384883251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384945765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384958316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.385102977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.393595106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396454919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396464389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396559087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:54 ha-213000 dockerd[1151]: time="2024-11-05T18:14:54.321538330Z" level=info msg="ignoring event" container=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322187590Z" level=info msg="shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322448589Z" level=warning msg="cleaning up after shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322490228Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	568ed995df15d       8c811b4aec35f       35 seconds ago       Running             busybox                   2                   f5d092375dddf       busybox-7dff88458-q5j74
	a54d96a8e9e4d       9ca7e41918271       35 seconds ago       Running             kindnet-cni               2                   07702f76ce639       kindnet-hppzk
	820b778421b38       c69fa2e9cbf5f       35 seconds ago       Running             coredns                   2                   bc67a22cb5eff       coredns-7c65d6cfc9-cv2cc
	ca9011bea4440       c69fa2e9cbf5f       35 seconds ago       Running             coredns                   2                   703f8fe612ac5       coredns-7c65d6cfc9-q96rw
	85e7cccdf4831       505d571f5fd56       35 seconds ago       Running             kube-proxy                2                   7a4f7e3a95ced       kube-proxy-s8xxj
	ea27059bb8dad       6e38f40d628db       36 seconds ago       Exited              storage-provisioner       4                   7a18da25cf537       storage-provisioner
	43950f04c89aa       0486b6c53a1b5       About a minute ago   Running             kube-controller-manager   4                   3c4a95766d8df       kube-controller-manager-ha-213000
	8e0c0916fca71       9499c9960544e       About a minute ago   Running             kube-apiserver            4                   f2454c695936e       kube-apiserver-ha-213000
	897300e44633b       baf03d14a86fd       2 minutes ago        Running             kube-vip                  1                   f00a17fab8835       kube-vip-ha-213000
	ad7975173845f       847c7bc1a5418       2 minutes ago        Running             kube-scheduler            2                   5162e28d0e03d       kube-scheduler-ha-213000
	8a28e20a2bf3d       2e96e5913fc06       2 minutes ago        Running             etcd                      2                   acdca4d26c9f6       etcd-ha-213000
	ea0b432d94423       0486b6c53a1b5       2 minutes ago        Exited              kube-controller-manager   3                   3c4a95766d8df       kube-controller-manager-ha-213000
	16b5e8baed219       9499c9960544e       2 minutes ago        Exited              kube-apiserver            3                   f2454c695936e       kube-apiserver-ha-213000
	96799b06e508f       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   07d926acb1a6e       busybox-7dff88458-q5j74
	86ef547964bcb       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   5fe3e01a4f33a       coredns-7c65d6cfc9-q96rw
	dd08019aca606       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   00f7c155eb4b0       coredns-7c65d6cfc9-cv2cc
	4aec0d02658e0       505d571f5fd56       4 minutes ago        Exited              kube-proxy                1                   1ece5e2bcaf09       kube-proxy-s8xxj
	f9a05b099e4ee       9ca7e41918271       4 minutes ago        Exited              kindnet-cni               1                   fd311d6ed9c5c       kindnet-hppzk
	51c2df7fc859d       baf03d14a86fd       5 minutes ago        Exited              kube-vip                  0                   98323683c9082       kube-vip-ha-213000
	bdbc1a6e54924       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   474c9f706901d       etcd-ha-213000
	f1607d6ea7a30       847c7bc1a5418       5 minutes ago        Exited              kube-scheduler            1                   b217215a9cf0c       kube-scheduler-ha-213000
	
	
	==> coredns [820b778421b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59240 - 59060 "HINFO IN 4329632244317726903.7890662898760833477. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011788676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[675101378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.641) (total time: 30001ms):
	Trace[675101378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.641)
	Trace[675101378]: [30.00107355s] [30.00107355s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[792881874]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30001ms):
	Trace[792881874]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:14:54.642)
	Trace[792881874]: [30.001711346s] [30.001711346s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[34248386]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[34248386]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[34248386]: [30.000366606s] [30.000366606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [86ef547964bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33774 - 54633 "HINFO IN 1409488340311598538.4125883895955909161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004156009s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1322590960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1322590960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.870)
	Trace[1322590960]: [30.003129161s] [30.003129161s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1548400132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30002ms):
	Trace[1548400132]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:11:00.870)
	Trace[1548400132]: [30.002952972s] [30.002952972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1633349832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.870) (total time: 30002ms):
	Trace[1633349832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[1633349832]: [30.002091533s] [30.002091533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca9011bea444] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47030 - 28453 "HINFO IN 9030478600017221968.7137590874178245370. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011696462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[954770416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30002ms):
	Trace[954770416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:14:54.642)
	Trace[954770416]: [30.002259073s] [30.002259073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1172241105]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1172241105]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[1172241105]: [30.000198867s] [30.000198867s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1149531028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1149531028]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.645)
	Trace[1149531028]: [30.000272321s] [30.000272321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [dd08019aca60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56311 - 34269 "HINFO IN 2200850437967647570.948968209837946997. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0110095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819586440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30001ms):
	Trace[819586440]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:11:00.870)
	Trace[819586440]: [30.001860838s] [30.001860838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[58172056]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.869) (total time: 30000ms):
	Trace[58172056]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[58172056]: [30.000759284s] [30.000759284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1700347832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1700347832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.871)
	Trace[1700347832]: [30.003960758s] [30.003960758s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:14:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1892e4225dd5499cb35e29ff753a0c40
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    872d5ac1-d893-413e-b883-f1ad425b7c82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 35s                    kube-proxy       
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  Starting                 5m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           84s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dc248d7debd421bb4108dc092da24e0
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    8a40793c-3b3c-49c9-a112-66a753c3fa07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  Starting                 4m48s                kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           12m                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             8m45s                node-controller  Node ha-213000-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m53s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           4m52s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           4m10s                node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)    kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           85s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:11:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 efb6d3b228624c8f9582b78a04751815
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    6405d175-8027-4e75-bb1e-1845fbf67784
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-28tbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-p4bx6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-m45pk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m53s                  kube-proxy       
	  Normal   Starting                 3m16s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           9m59s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeReady                9m39s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m53s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           4m52s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             4m13s                  node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m18s (x2 over 3m18s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m18s (x2 over 3m18s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m18s (x2 over 3m18s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m18s                  kubelet          Node ha-213000-m04 has been rebooted, boot id: 6405d175-8027-4e75-bb1e-1845fbf67784
	  Normal   NodeReady                3m18s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           85s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           85s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             45s                    node-controller  Node ha-213000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036175] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007972] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.844917] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000007] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006614] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702887] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.233657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.342806] systemd-fstab-generator[457]: Ignoring "noauto" option for root device
	[  +0.102790] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +2.007272] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.269734] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.085327] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.060857] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.057582] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.475879] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.104726] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.119211] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.130514] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.455084] systemd-fstab-generator[1568]: Ignoring "noauto" option for root device
	[  +6.862189] kauditd_printk_skb: 190 callbacks suppressed
	[Nov 5 18:13] kauditd_printk_skb: 40 callbacks suppressed
	[Nov 5 18:14] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8a28e20a2bf3] <==
	{"level":"info","ts":"2024-11-05T18:13:31.135398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from 585aaf1d56a73c02 at term 3"}
	{"level":"info","ts":"2024-11-05T18:13:31.135413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-11-05T18:13:31.135422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.135426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.135442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 3001] sent MsgVote request to 585aaf1d56a73c02 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from 585aaf1d56a73c02 at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-11-05T18:13:31.139678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-11-05T18:13:31.139699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"warn","ts":"2024-11-05T18:13:31.139920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.668851654s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-11-05T18:13:31.139942Z","caller":"traceutil/trace.go:171","msg":"trace[1810206807] range","detail":"{range_begin:; range_end:; }","duration":"1.668918965s","start":"2024-11-05T18:13:29.471018Z","end":"2024-11-05T18:13:31.139937Z","steps":["trace[1810206807] 'agreement among raft nodes before linearized reading'  (duration: 1.668850533s)"],"step_count":1}
	{"level":"error","ts":"2024-11-05T18:13:31.139988Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-11-05T18:13:31.146507Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-213000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T18:13:31.146769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:13:31.147253Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T18:13:31.148572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:13:31.149600Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-11-05T18:13:31.149813Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T18:13:31.149866Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-11-05T18:13:31.148984Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T18:13:31.150885Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-11-05T18:13:31.153408Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-11-05T18:13:31.155499Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-11-05T18:13:31.156813Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-11-05T18:14:55.084484Z","caller":"traceutil/trace.go:171","msg":"trace[689855107] transaction","detail":"{read_only:false; response_revision:2931; number_of_response:1; }","duration":"110.3233ms","start":"2024-11-05T18:14:54.974150Z","end":"2024-11-05T18:14:55.084473Z","steps":["trace[689855107] 'process raft request'  (duration: 110.263526ms)"],"step_count":1}
	
	
	==> etcd [bdbc1a6e5492] <==
	{"level":"warn","ts":"2024-11-05T18:12:13.699058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:09.275669Z","time spent":"4.423385981s","remote":"127.0.0.1:52268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:13.283499Z","time spent":"415.604721ms","remote":"127.0.0.1:52350","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.487277082s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699158Z","caller":"traceutil/trace.go:171","msg":"trace[1772748615] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"7.487289106s","start":"2024-11-05T18:12:06.211867Z","end":"2024-11-05T18:12:13.699156Z","steps":["trace[1772748615] 'agreement among raft nodes before linearized reading'  (duration: 7.487277083s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699169Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:06.211838Z","time spent":"7.487327421s","remote":"127.0.0.1:52456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.037776693s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699221Z","caller":"traceutil/trace.go:171","msg":"trace[763418090] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"2.037787826s","start":"2024-11-05T18:12:11.661430Z","end":"2024-11-05T18:12:13.699218Z","steps":["trace[763418090] 'agreement among raft nodes before linearized reading'  (duration: 2.037776524s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:11.661414Z","time spent":"2.03781384s","remote":"127.0.0.1:52228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.734339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-05T18:12:13.734385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-11-05T18:12:13.734444Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-11-05T18:12:13.734706Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734723Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734820Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734866Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.735810Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735871Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735879Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-213000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 18:15:00 up 2 min,  0 users,  load average: 0.09, 0.10, 0.04
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a54d96a8e9e4] <==
	I1105 18:14:25.104544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1105 18:14:25.791429       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	add table inet kindnet-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I1105 18:14:35.800964       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:35.801150       1 main.go:301] handling current node
	I1105 18:14:35.801980       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:35.802041       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:35.802606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.6 Flags: [] Table: 0 Realm: 0} 
	I1105 18:14:35.802866       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:35.802935       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:14:35.804797       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.8 Flags: [] Table: 0 Realm: 0} 
	I1105 18:14:45.792345       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:45.792414       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:45.792632       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:45.792668       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:14:45.792764       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:45.792808       1 main.go:301] handling current node
	I1105 18:14:55.801709       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:14:55.801907       1 main.go:301] handling current node
	I1105 18:14:55.801962       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:14:55.801980       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:14:55.802165       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:14:55.802236       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f9a05b099e4e] <==
	I1105 18:11:41.574590       1 main.go:301] handling current node
	I1105 18:11:41.574599       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:41.574604       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:41.574749       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:41.574789       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567175       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:11:51.567282       1 main.go:301] handling current node
	I1105 18:11:51.567311       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:51.567325       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:51.567514       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:51.567574       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567879       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:11:51.567959       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:01.566316       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:01.566340       1 main.go:301] handling current node
	I1105 18:12:01.566353       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:01.566358       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:01.566565       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:01.566573       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:11.571151       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:11.571336       1 main.go:301] handling current node
	I1105 18:12:11.571478       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:11.571602       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:11.572596       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:11.572626       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [16b5e8baed21] <==
	I1105 18:12:47.610850       1 options.go:228] external host was not specified, using 192.169.0.5
	I1105 18:12:47.613755       1 server.go:142] Version: v1.31.2
	I1105 18:12:47.614011       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.895871       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:12:48.898884       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:12:48.901520       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:12:48.901573       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:12:48.902234       1 instance.go:232] Using reconciler: lease
	W1105 18:13:08.892813       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1105 18:13:08.896286       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1105 18:13:08.903685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W1105 18:13:08.903693       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [8e0c0916fca7] <==
	I1105 18:13:32.048504       1 establishing_controller.go:81] Starting EstablishingController
	I1105 18:13:32.048599       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1105 18:13:32.048646       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1105 18:13:32.048673       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1105 18:13:32.111932       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:13:32.112352       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:13:32.112415       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:13:32.112712       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:13:32.112790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:13:32.115714       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:13:32.115760       1 policy_source.go:224] refreshing policies
	I1105 18:13:32.115832       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:13:32.118673       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:13:32.126538       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:13:32.129328       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:13:32.136801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:13:32.137650       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:13:32.137679       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:13:32.137683       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:13:32.137688       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:13:32.144136       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1105 18:13:32.162460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1105 18:13:33.018201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:13:33.274965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:14:23.399590       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [43950f04c89a] <==
	I1105 18:14:15.564177       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5ldvg"
	I1105 18:14:15.564353       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-213000-m03"
	I1105 18:14:15.565183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.411µs"
	I1105 18:14:15.590695       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-213000-m03"
	I1105 18:14:15.590731       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-trfhn"
	I1105 18:14:15.610087       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-trfhn"
	I1105 18:14:15.610123       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-213000-m03"
	E1105 18:14:15.613786       1 gc_controller.go:255] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4589347d-3131-41ad-822d-d41f3e03a634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2024-11-05T18:14:15Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-vip-ha-213000-m03\": pods \"kube-vip-ha-213000-m03\" not found" logger="UnhandledError"
	I1105 18:14:15.615307       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-213000-m03"
	I1105 18:14:15.635144       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-213000-m03"
	I1105 18:14:20.621696       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:14:23.416708       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2rcm6\": the object has been modified; please apply your changes to the latest version and try again"
	I1105 18:14:23.416951       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"eea44333-75c8-4ade-8223-0ee24b6f9ab0", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2rcm6": the object has been modified; please apply your changes to the latest version and try again
	I1105 18:14:23.435993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="56.128077ms"
	I1105 18:14:23.436289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.743µs"
	I1105 18:14:23.503484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.726592ms"
	I1105 18:14:23.503948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.593µs"
	I1105 18:14:23.564006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.467814ms"
	I1105 18:14:23.564310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.074µs"
	I1105 18:14:25.752475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.649291ms"
	I1105 18:14:25.752678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.633µs"
	I1105 18:14:25.765769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.252µs"
	I1105 18:14:25.785523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.091µs"
	I1105 18:14:25.792738       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-2rcm6\": the object has been modified; please apply your changes to the latest version and try again"
	I1105 18:14:25.793122       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"eea44333-75c8-4ade-8223-0ee24b6f9ab0", APIVersion:"v1", ResourceVersion:"300", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-2rcm6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-2rcm6": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [ea0b432d9442] <==
	I1105 18:12:48.246520       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:12:48.777745       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:12:48.777814       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.783136       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:12:48.783574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:12:48.783729       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:12:48.783931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1105 18:13:09.910735       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [4aec0d02658e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:10:30.967416       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:10:30.985864       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:10:30.985986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:10:31.019992       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:10:31.020085       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:10:31.020128       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:10:31.022301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:10:31.022843       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:10:31.022888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:10:31.026969       1 config.go:199] "Starting service config controller"
	I1105 18:10:31.027078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:10:31.027666       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:10:31.027692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:10:31.028138       1 config.go:328] "Starting node config controller"
	I1105 18:10:31.028170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:10:31.130453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:10:31.130459       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:10:31.130467       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [85e7cccdf483] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:14:24.812805       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:14:24.832536       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:14:24.832803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:14:24.864245       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:14:24.864284       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:14:24.864314       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:14:24.866476       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:14:24.868976       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:14:24.869009       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:14:24.872199       1 config.go:199] "Starting service config controller"
	I1105 18:14:24.872427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:14:24.872629       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:14:24.872656       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:14:24.874721       1 config.go:328] "Starting node config controller"
	I1105 18:14:24.874748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:14:24.974138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:14:24.974427       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:14:24.975147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad7975173845] <==
	W1105 18:13:17.072213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.072242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.177384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.177607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.472456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.472508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.646303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.646354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.851021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.851072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:18.674193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:18.674222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.133550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.133602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.167612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.167767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.410336       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.410541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.515934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.516006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.540843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.540926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.825617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.825717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I1105 18:13:32.157389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1607d6ea7a3] <==
	W1105 18:10:03.671887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 18:10:03.671970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 18:10:03.672285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:10:03.672503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.672829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672954       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 18:10:03.673005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:10:03.673161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 18:10:03.673298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.673427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.703301       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 18:10:03.703348       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1105 18:10:27.397168       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:11:49.191240       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	E1105 18:11:49.193101       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	I1105 18:11:49.193402       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-28tbv" node="ha-213000-m04"
	I1105 18:12:13.753881       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1105 18:12:13.756404       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1105 18:12:13.756765       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.440521    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.541552    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.641846    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.742792    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.844458    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:22 ha-213000 kubelet[1575]: E1105 18:14:22.945965    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: E1105 18:14:23.047096    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.299353    1575 apiserver.go:52] "Watching apiserver"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.401536    1575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.426959    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-xtables-lock\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427025    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-cni-cfg\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427041    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-lib-modules\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427052    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e7f00930-b382-473c-be59-04504c6e23ff-tmp\") pod \"storage-provisioner\" (UID: \"e7f00930-b382-473c-be59-04504c6e23ff\") " pod="kube-system/storage-provisioner"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427090    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-xtables-lock\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427103    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-lib-modules\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.446313    1575 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 05 18:14:24 ha-213000 kubelet[1575]: I1105 18:14:24.613521    1575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b"
	Nov 05 18:14:40 ha-213000 kubelet[1575]: E1105 18:14:40.279613    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971252    1575 scope.go:117] "RemoveContainer" containerID="6668904ee766d56b8d55ddf5af906befaf694e0933fdf7c8fdb3b42a676d0fb3"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971818    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: E1105 18:14:54.971979    1575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e7f00930-b382-473c-be59-04504c6e23ff)\"" pod="kube-system/storage-provisioner" podUID="e7f00930-b382-473c-be59-04504c6e23ff"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.08s)

TestMultiControlPlane/serial/AddSecondaryNode (79.61s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-213000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-213000 --control-plane -v=7 --alsologtostderr: (1m15.113287887s)
ha_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 2 (504.349286ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	
	ha-213000-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 10:16:17.049301   20805 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:16:17.050000   20805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:16:17.050007   20805 out.go:358] Setting ErrFile to fd 2...
	I1105 10:16:17.050011   20805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:16:17.050188   20805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:16:17.050393   20805 out.go:352] Setting JSON to false
	I1105 10:16:17.050417   20805 mustload.go:65] Loading cluster: ha-213000
	I1105 10:16:17.050470   20805 notify.go:220] Checking for updates...
	I1105 10:16:17.050807   20805 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:16:17.050831   20805 status.go:174] checking status of ha-213000 ...
	I1105 10:16:17.051295   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.051352   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.062692   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59200
	I1105 10:16:17.063052   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.063492   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.063505   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.063768   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.063877   20805 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:16:17.063960   20805 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:16:17.064045   20805 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:16:17.065209   20805 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:16:17.065226   20805 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:16:17.065486   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.065510   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.080830   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59202
	I1105 10:16:17.081162   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.081493   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.081506   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.081733   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.081838   20805 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:16:17.081955   20805 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:16:17.082217   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.082242   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.093087   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59204
	I1105 10:16:17.093411   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.093832   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.093853   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.094082   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.094176   20805 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:16:17.094344   20805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:16:17.094365   20805 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:16:17.094452   20805 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:16:17.094540   20805 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:16:17.094629   20805 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:16:17.094713   20805 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:16:17.126209   20805 ssh_runner.go:195] Run: systemctl --version
	I1105 10:16:17.130815   20805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:16:17.143568   20805 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:16:17.143594   20805 api_server.go:166] Checking apiserver status ...
	I1105 10:16:17.143651   20805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:16:17.155825   20805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2387/cgroup
	W1105 10:16:17.164635   20805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2387/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:16:17.165168   20805 ssh_runner.go:195] Run: ls
	I1105 10:16:17.168399   20805 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:16:17.172553   20805 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:16:17.172566   20805 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:16:17.172575   20805 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:16:17.172587   20805 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:16:17.172886   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.172908   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.184326   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59208
	I1105 10:16:17.184670   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.185029   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.185043   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.185266   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.185374   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:16:17.185465   20805 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:16:17.185555   20805 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:16:17.186713   20805 status.go:371] ha-213000-m02 host status = "Running" (err=<nil>)
	I1105 10:16:17.186722   20805 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:16:17.186992   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.187016   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.198158   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59210
	I1105 10:16:17.198471   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.198821   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.198838   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.199089   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.199206   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:16:17.199305   20805 host.go:66] Checking if "ha-213000-m02" exists ...
	I1105 10:16:17.199570   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.199599   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.210566   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59212
	I1105 10:16:17.210890   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.211198   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.211209   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.211444   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.211554   20805 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:16:17.211717   20805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:16:17.211729   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:16:17.211806   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:16:17.211884   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:16:17.211969   20805 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:16:17.212050   20805 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:16:17.248742   20805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:16:17.263056   20805 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:16:17.263071   20805 api_server.go:166] Checking apiserver status ...
	I1105 10:16:17.263122   20805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:16:17.274472   20805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2013/cgroup
	W1105 10:16:17.281935   20805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2013/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:16:17.281996   20805 ssh_runner.go:195] Run: ls
	I1105 10:16:17.285150   20805 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:16:17.288364   20805 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:16:17.288376   20805 status.go:463] ha-213000-m02 apiserver status = Running (err=<nil>)
	I1105 10:16:17.288381   20805 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:16:17.288391   20805 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:16:17.288667   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.288688   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.299877   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59216
	I1105 10:16:17.300217   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.300588   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.300604   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.300828   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.300928   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:16:17.301022   20805 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:16:17.301110   20805 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:16:17.302295   20805 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:16:17.302303   20805 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:16:17.302563   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.302592   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.313681   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59218
	I1105 10:16:17.314007   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.314332   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.314342   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.314579   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.314709   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:16:17.314813   20805 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:16:17.315085   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.315109   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.327009   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59220
	I1105 10:16:17.327339   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.327699   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.327716   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.327929   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.328054   20805 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:16:17.328240   20805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:16:17.328258   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:16:17.328350   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:16:17.328432   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:16:17.328523   20805 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:16:17.328612   20805 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:16:17.356866   20805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:16:17.368362   20805 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:16:17.368382   20805 status.go:174] checking status of ha-213000-m05 ...
	I1105 10:16:17.368936   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.368965   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.380026   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59223
	I1105 10:16:17.380352   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.380698   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.380707   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.380924   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.381019   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetState
	I1105 10:16:17.381110   20805 main.go:141] libmachine: (ha-213000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:16:17.381203   20805 main.go:141] libmachine: (ha-213000-m05) DBG | hyperkit pid from json: 20765
	I1105 10:16:17.382429   20805 status.go:371] ha-213000-m05 host status = "Running" (err=<nil>)
	I1105 10:16:17.382438   20805 host.go:66] Checking if "ha-213000-m05" exists ...
	I1105 10:16:17.382699   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.382730   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.394239   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59225
	I1105 10:16:17.394567   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.394920   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.394938   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.395162   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.395268   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetIP
	I1105 10:16:17.395376   20805 host.go:66] Checking if "ha-213000-m05" exists ...
	I1105 10:16:17.395671   20805 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:16:17.395703   20805 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:16:17.407071   20805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59227
	I1105 10:16:17.407445   20805 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:16:17.407834   20805 main.go:141] libmachine: Using API Version  1
	I1105 10:16:17.407853   20805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:16:17.408090   20805 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:16:17.408200   20805 main.go:141] libmachine: (ha-213000-m05) Calling .DriverName
	I1105 10:16:17.408349   20805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:16:17.408361   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetSSHHostname
	I1105 10:16:17.408437   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetSSHPort
	I1105 10:16:17.408520   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetSSHKeyPath
	I1105 10:16:17.408601   20805 main.go:141] libmachine: (ha-213000-m05) Calling .GetSSHUsername
	I1105 10:16:17.408676   20805 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m05/id_rsa Username:docker}
	I1105 10:16:17.440861   20805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:16:17.453752   20805 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:16:17.453767   20805 api_server.go:166] Checking apiserver status ...
	I1105 10:16:17.453821   20805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:16:17.465313   20805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup
	W1105 10:16:17.473679   20805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1951/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:16:17.473753   20805 ssh_runner.go:195] Run: ls
	I1105 10:16:17.476913   20805 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:16:17.480061   20805 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:16:17.480072   20805 status.go:463] ha-213000-m05 apiserver status = Running (err=<nil>)
	I1105 10:16:17.480077   20805 status.go:176] ha-213000-m05 status: &{Name:ha-213000-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:615: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (3.382981991s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000 -v=7                                                                                                       | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-213000 -v=7                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:08 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST |                     |
	| node    | ha-213000 node delete m03 -v=7                                                                                               | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-213000 stop -v=7                                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:12 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true                                                                                                     | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:12 PST |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-213000                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:15 PST | 05 Nov 24 10:16 PST |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:12:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:12:21.490688   20650 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.490996   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491002   20650 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.491006   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491183   20650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.492670   20650 out.go:352] Setting JSON to false
	I1105 10:12:21.523908   20650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7910,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:12:21.523997   20650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:12:21.546247   20650 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:12:21.588131   20650 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:12:21.588174   20650 notify.go:220] Checking for updates...
	I1105 10:12:21.632868   20650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:21.654057   20650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:12:21.674788   20650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:12:21.696036   20650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:12:21.717022   20650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:12:21.738560   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.739289   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.739362   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.752070   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59007
	I1105 10:12:21.752427   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.752834   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.752843   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.753115   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.753236   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.753425   20650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:12:21.753684   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.753710   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.764480   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59009
	I1105 10:12:21.764817   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.765142   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.765158   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.765399   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.765513   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.796815   20650 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:12:21.838784   20650 start.go:297] selected driver: hyperkit
	I1105 10:12:21.838816   20650 start.go:901] validating driver "hyperkit" against &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.839082   20650 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:12:21.839288   20650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.839546   20650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:12:21.851704   20650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:12:21.858679   20650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.858708   20650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:12:21.864360   20650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:12:21.864394   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:21.864431   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:21.864510   20650 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisi
oner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.864624   20650 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.886086   20650 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:12:21.927848   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:21.927921   20650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:12:21.927965   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:21.928204   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:21.928223   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:21.928393   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:21.929303   20650 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:21.929483   20650 start.go:364] duration metric: took 156.606µs to acquireMachinesLock for "ha-213000"
	I1105 10:12:21.929515   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:21.929530   20650 fix.go:54] fixHost starting: 
	I1105 10:12:21.929991   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.930022   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.941843   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59011
	I1105 10:12:21.942146   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.942523   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.942539   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.942770   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.942869   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.942962   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.943046   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.943124   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.944238   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.944273   20650 fix.go:112] recreateIfNeeded on ha-213000: state=Stopped err=<nil>
	I1105 10:12:21.944288   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	W1105 10:12:21.944375   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:21.965704   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000" ...
	I1105 10:12:21.986830   20650 main.go:141] libmachine: (ha-213000) Calling .Start
	I1105 10:12:21.986975   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.987000   20650 main.go:141] libmachine: (ha-213000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:12:21.988429   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.988437   20650 main.go:141] libmachine: (ha-213000) DBG | pid 20508 is in state "Stopped"
	I1105 10:12:21.988449   20650 main.go:141] libmachine: (ha-213000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid...
	I1105 10:12:21.988605   20650 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:12:22.098530   20650 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:12:22.098573   20650 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:22.098772   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098813   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098872   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:22.098916   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:22.098942   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:22.100556   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Pid is 20664
	I1105 10:12:22.101143   20650 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:12:22.101159   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:22.101260   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:12:22.103059   20650 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:12:22.103211   20650 main.go:141] libmachine: (ha-213000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:22.103230   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:22.103244   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:22.103282   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:22.103300   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6d37}
	I1105 10:12:22.103320   20650 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:12:22.103326   20650 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:12:22.103333   20650 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
	I1105 10:12:22.104301   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:22.104508   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:22.104940   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:22.104951   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:22.105084   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:22.105206   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:22.105334   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105499   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105662   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:22.106057   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:22.106277   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:22.106287   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:22.111841   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:22.167275   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:22.168436   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.168488   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.168505   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.168538   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.563375   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:22.563390   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:22.678087   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.678107   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.678118   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.678127   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.678997   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:22.679010   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:28.419344   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:28.419383   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:28.419395   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:28.443700   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:12:33.165174   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:12:33.165187   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165353   20650 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:12:33.165363   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165462   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.165555   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.165665   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165766   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.166032   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.166168   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.166176   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:12:33.233946   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:12:33.233965   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.234107   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.234213   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234303   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234419   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.234574   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.234722   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.234733   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:12:33.296276   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:12:33.296296   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:12:33.296314   20650 buildroot.go:174] setting up certificates
	I1105 10:12:33.296331   20650 provision.go:84] configureAuth start
	I1105 10:12:33.296340   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.296489   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:33.296589   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.296674   20650 provision.go:143] copyHostCerts
	I1105 10:12:33.296705   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296779   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:12:33.296787   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296976   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:12:33.297202   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297251   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:12:33.297256   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297953   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:12:33.298150   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298199   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:12:33.298205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298290   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:12:33.298468   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:12:33.417814   20650 provision.go:177] copyRemoteCerts
	I1105 10:12:33.417886   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:12:33.417904   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.418044   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.418142   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.418231   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.418333   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:33.452233   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:12:33.452305   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 10:12:33.471837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:12:33.471904   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:12:33.491510   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:12:33.491572   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:12:33.511221   20650 provision.go:87] duration metric: took 214.877215ms to configureAuth
	I1105 10:12:33.511235   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:12:33.511399   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:33.511412   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:33.511554   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.511653   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.511767   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511859   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511944   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.512074   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.512201   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.512209   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:12:33.567448   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:12:33.567460   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:12:33.567540   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:12:33.567552   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.567685   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.567782   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567875   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567957   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.568105   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.568243   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.568289   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:12:33.633746   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:12:33.633770   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.633912   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.634017   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634113   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634221   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.634373   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.634523   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.634538   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:12:35.361033   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:12:35.361047   20650 machine.go:96] duration metric: took 13.256219662s to provisionDockerMachine
	I1105 10:12:35.361058   20650 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:12:35.361081   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:12:35.361095   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.361306   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:12:35.361323   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.361415   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.361506   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.361580   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.361669   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.396970   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:12:35.400946   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:12:35.400961   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:12:35.401074   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:12:35.401496   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:12:35.401503   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:12:35.401766   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:12:35.411536   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:35.443784   20650 start.go:296] duration metric: took 82.704716ms for postStartSetup
	I1105 10:12:35.443806   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.444003   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:12:35.444016   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.444100   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.444180   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.444258   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.444349   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.477407   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:12:35.477482   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:12:35.509435   20650 fix.go:56] duration metric: took 13.580030444s for fixHost
	I1105 10:12:35.509456   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.509592   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.509688   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509776   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.510031   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:35.510178   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:35.510185   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:12:35.565839   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830355.864292832
	
	I1105 10:12:35.565852   20650 fix.go:216] guest clock: 1730830355.864292832
	I1105 10:12:35.565857   20650 fix.go:229] Guest: 2024-11-05 10:12:35.864292832 -0800 PST Remote: 2024-11-05 10:12:35.509447 -0800 PST m=+14.061466364 (delta=354.845832ms)
	I1105 10:12:35.565875   20650 fix.go:200] guest clock delta is within tolerance: 354.845832ms
	I1105 10:12:35.565882   20650 start.go:83] releasing machines lock for "ha-213000", held for 13.636511126s
	I1105 10:12:35.565900   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566049   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:35.566151   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566446   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566554   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566709   20650 ssh_runner.go:195] Run: cat /version.json
	I1105 10:12:35.566721   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.566806   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.566888   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.566979   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567064   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.567357   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:12:35.567386   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.567477   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.567559   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.567637   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567715   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.649786   20650 ssh_runner.go:195] Run: systemctl --version
	I1105 10:12:35.655155   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:12:35.659391   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:12:35.659449   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:12:35.672884   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:12:35.672896   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.672997   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:35.691142   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:12:35.700361   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:12:35.709604   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:12:35.709664   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:12:35.718677   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.727574   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:12:35.736665   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.745463   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:12:35.754435   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:12:35.763449   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:12:35.772263   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:12:35.781386   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:12:35.789651   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:12:35.789704   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:12:35.798805   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:12:35.807011   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:35.912193   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:12:35.927985   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.928078   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:12:35.940041   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.954880   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:12:35.969797   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.981073   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:35.992124   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:12:36.016061   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:36.027432   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:36.042843   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:12:36.045910   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:12:36.054070   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:12:36.067653   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:12:36.164803   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:12:36.262358   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:12:36.262434   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:12:36.276549   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:36.372055   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:12:38.718640   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346585524s)
	I1105 10:12:38.718725   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:12:38.729009   20650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:12:38.741745   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:38.752392   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:12:38.846699   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:12:38.961329   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.072900   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:12:39.086802   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:39.097743   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.205555   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:12:39.272726   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:12:39.273861   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:12:39.278279   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:12:39.278336   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:12:39.281386   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:12:39.307263   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:12:39.307378   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.325423   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.384603   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:12:39.384677   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:39.385383   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:12:39.389204   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.398876   20650 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:12:39.398970   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:39.399044   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.411346   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.411370   20650 docker.go:619] Images already preloaded, skipping extraction
	I1105 10:12:39.411458   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.424491   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.424511   20650 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:12:39.424518   20650 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:12:39.424600   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:12:39.424690   20650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:12:39.458782   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:39.458796   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:39.458807   20650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:12:39.458824   20650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:12:39.458910   20650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:12:39.458922   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:12:39.459000   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:12:39.472063   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:12:39.472130   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:12:39.472197   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:12:39.480694   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:12:39.480761   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:12:39.488010   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:12:39.501448   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:12:39.514699   20650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:12:39.528604   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:12:39.542711   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:12:39.545676   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.555042   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.651842   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:12:39.666232   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:12:39.666245   20650 certs.go:194] generating shared ca certs ...
	I1105 10:12:39.666254   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.666455   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:12:39.666548   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:12:39.666558   20650 certs.go:256] generating profile certs ...
	I1105 10:12:39.666641   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:12:39.666660   20650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b
	I1105 10:12:39.666677   20650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:12:39.768951   20650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b ...
	I1105 10:12:39.768965   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b: {Name:mk94691c5901a2a72a9bc83f127c5282216d457c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.769986   20650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b ...
	I1105 10:12:39.770003   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b: {Name:mk80fa552a8414775a1a2e3534b5be60adeae6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.770739   20650 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:12:39.770972   20650 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:12:39.771252   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:12:39.771262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:12:39.771288   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:12:39.771314   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:12:39.771335   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:12:39.771353   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:12:39.771376   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:12:39.771395   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:12:39.771413   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:12:39.771524   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:12:39.771579   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:12:39.771588   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:12:39.771622   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:12:39.771657   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:12:39.771686   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:12:39.771750   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:39.771787   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:12:39.771817   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:39.771836   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:12:39.772313   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:12:39.799103   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:12:39.823713   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:12:39.848122   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:12:39.876362   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:12:39.898968   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:12:39.924496   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:12:39.975578   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:12:40.017567   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:12:40.062386   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:12:40.134510   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:12:40.170763   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:12:40.196135   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:12:40.201525   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:12:40.214259   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222331   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222400   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.235959   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:12:40.247519   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:12:40.256007   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259529   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259576   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.263770   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:12:40.272126   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:12:40.280328   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283753   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283804   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.288095   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:12:40.296378   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:12:40.300009   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:12:40.304421   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:12:40.309440   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:12:40.314156   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:12:40.318720   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:12:40.323054   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:12:40.327653   20650 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:40.327789   20650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:12:40.338896   20650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:12:40.346426   20650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 10:12:40.346451   20650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 10:12:40.346505   20650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 10:12:40.354659   20650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:12:40.354973   20650 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-213000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355052   20650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19910-17277/kubeconfig needs updating (will repair): [kubeconfig missing "ha-213000" cluster setting kubeconfig missing "ha-213000" context setting]
	I1105 10:12:40.355252   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.355659   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355866   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:12:40.356225   20650 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:12:40.356390   20650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 10:12:40.363779   20650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1105 10:12:40.363792   20650 kubeadm.go:597] duration metric: took 17.337248ms to restartPrimaryControlPlane
	I1105 10:12:40.363798   20650 kubeadm.go:394] duration metric: took 36.151791ms to StartCluster
	I1105 10:12:40.363807   20650 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.363904   20650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.364287   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.364493   20650 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:12:40.364506   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:12:40.364518   20650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:12:40.364641   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.406496   20650 out.go:177] * Enabled addons: 
	I1105 10:12:40.427423   20650 addons.go:510] duration metric: took 62.890869ms for enable addons: enabled=[]
	I1105 10:12:40.427463   20650 start.go:246] waiting for cluster config update ...
	I1105 10:12:40.427476   20650 start.go:255] writing updated cluster config ...
	I1105 10:12:40.449627   20650 out.go:201] 
	I1105 10:12:40.470603   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.470682   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.492690   20650 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:12:40.534643   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:40.534678   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:40.534889   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:40.534908   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:40.535035   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.535960   20650 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:40.536081   20650 start.go:364] duration metric: took 95.311µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:12:40.536107   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:40.536116   20650 fix.go:54] fixHost starting: m02
	I1105 10:12:40.536544   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:40.536591   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:40.548252   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59033
	I1105 10:12:40.548561   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:40.548918   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:40.548932   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:40.549159   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:40.549276   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.549386   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:40.549477   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.549545   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:40.550641   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.550670   20650 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:12:40.550679   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:12:40.550782   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:40.571623   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:12:40.592623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:12:40.592918   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.592966   20650 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:12:40.594491   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.594501   20650 main.go:141] libmachine: (ha-213000-m02) DBG | pid 20524 is in state "Stopped"
	I1105 10:12:40.594516   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:12:40.594967   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:12:40.619713   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:12:40.619737   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:40.619893   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619922   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619952   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:40.619999   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:40.620018   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:40.621465   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Pid is 20673
	I1105 10:12:40.621946   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:12:40.621963   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.622060   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:12:40.623801   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:12:40.623940   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:40.623961   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:12:40.623986   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:40.624000   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:40.624015   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:40.624016   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:12:40.624023   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:12:40.624043   20650 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:12:40.624734   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:12:40.624956   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.625445   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:40.625455   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.625562   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:12:40.625653   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:12:40.625748   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.625874   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.626045   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:12:40.626222   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:40.626362   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:12:40.626369   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:40.631955   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:40.641267   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:40.642527   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:40.642544   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:40.642551   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:40.642561   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.034838   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:41.034853   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:41.149888   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:41.149903   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:41.149911   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:41.149917   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.150684   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:41.150696   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:46.914486   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:46.914552   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:46.914564   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:46.937828   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:15.697814   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:15.697829   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.697958   20650 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:13:15.697969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.698068   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.698166   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.698262   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698349   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698429   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.698590   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.698739   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.698748   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:13:15.770158   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:13:15.770174   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.770319   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.770428   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770526   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.770785   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.770922   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.770933   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:15.838124   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:15.838139   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:15.838159   20650 buildroot.go:174] setting up certificates
	I1105 10:13:15.838166   20650 provision.go:84] configureAuth start
	I1105 10:13:15.838173   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.838309   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:15.838391   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.838477   20650 provision.go:143] copyHostCerts
	I1105 10:13:15.838504   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838551   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:15.838557   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838677   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:15.838892   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.838922   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:15.838926   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.839007   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:15.839169   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839200   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:15.839205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839275   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:15.839440   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:13:15.878682   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:15.878747   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:15.878761   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.878912   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.879015   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.879122   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.879221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:15.916727   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:15.916795   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:15.936280   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:15.936341   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:13:15.956339   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:15.956417   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:15.976131   20650 provision.go:87] duration metric: took 137.957663ms to configureAuth
	I1105 10:13:15.976145   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:15.976324   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:15.976339   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:15.976475   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.976573   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.976661   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976740   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976813   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.976940   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.977065   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.977072   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:16.038725   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:16.038739   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:16.038839   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:16.038851   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.038998   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.039098   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039192   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039283   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.039436   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.039572   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.039618   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:16.112446   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:16.112468   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.112623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.112715   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112811   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112892   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.113049   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.113223   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.113236   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:17.783702   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:17.783717   20650 machine.go:96] duration metric: took 37.158599705s to provisionDockerMachine
	I1105 10:13:17.783726   20650 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:13:17.783733   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:17.783744   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.783939   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:17.783953   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.784616   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.785152   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.785404   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.785500   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.822226   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:17.825293   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:17.825304   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:17.825392   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:17.825532   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:17.825538   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:17.825699   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:17.832977   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:17.852599   20650 start.go:296] duration metric: took 68.865935ms for postStartSetup
	I1105 10:13:17.852645   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.852828   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:17.852840   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.852946   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.853034   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.853111   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.853195   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.891315   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:17.891389   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:17.944504   20650 fix.go:56] duration metric: took 37.408724528s for fixHost
	I1105 10:13:17.944528   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.944681   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.944779   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944880   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944973   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.945125   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:17.945257   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:17.945264   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:18.009463   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830397.963598694
	
	I1105 10:13:18.009476   20650 fix.go:216] guest clock: 1730830397.963598694
	I1105 10:13:18.009482   20650 fix.go:229] Guest: 2024-11-05 10:13:17.963598694 -0800 PST Remote: 2024-11-05 10:13:17.944519 -0800 PST m=+56.496923048 (delta=19.079694ms)
	I1105 10:13:18.009492   20650 fix.go:200] guest clock delta is within tolerance: 19.079694ms
	I1105 10:13:18.009495   20650 start.go:83] releasing machines lock for "ha-213000-m02", held for 37.47374268s
	I1105 10:13:18.009512   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.009649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:18.032281   20650 out.go:177] * Found network options:
	I1105 10:13:18.052088   20650 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:13:18.073014   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.073053   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.073969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074186   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074319   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:13:18.074355   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:13:18.074369   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.074467   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:18.074483   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:18.074488   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074646   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074801   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.074850   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074993   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:18.075008   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.075127   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:13:18.108947   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:18.109027   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:18.155414   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:18.155436   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.155551   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.172114   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:18.180388   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:18.188528   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.188587   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:18.196712   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.204897   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:18.213206   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.221579   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:18.230149   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:18.238366   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:18.246617   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:18.255037   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:18.262631   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:18.262690   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:18.270933   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:18.278375   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.375712   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:18.394397   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.394485   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:18.410636   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.423391   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:18.441876   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.452612   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.462897   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:18.485662   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.495897   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.511009   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:18.513991   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:18.521476   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:18.534868   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:18.632191   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:18.734981   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.735009   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:18.749050   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.853897   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:13:21.134871   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28097554s)
	I1105 10:13:21.134948   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:13:21.146360   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.157264   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:13:21.267741   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:13:21.382285   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.483458   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:13:21.496077   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.506512   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.618640   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:13:21.685448   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:13:21.685559   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:13:21.689888   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:13:21.689958   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:13:21.693059   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:13:21.721401   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:13:21.721489   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.737796   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.775162   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:13:21.818311   20650 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:13:21.839158   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:21.839596   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:13:21.844257   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:21.854347   20650 mustload.go:65] Loading cluster: ha-213000
	I1105 10:13:21.854526   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:21.854763   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.854810   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.866117   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59055
	I1105 10:13:21.866449   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.866785   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.866795   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.867005   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.867094   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:13:21.867180   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:21.867248   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:13:21.868436   20650 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:13:21.868696   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.868721   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.879648   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59057
	I1105 10:13:21.879951   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.880304   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.880326   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.880564   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.880680   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:13:21.880800   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:13:21.880806   20650 certs.go:194] generating shared ca certs ...
	I1105 10:13:21.880817   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:13:21.880976   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:13:21.881033   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:13:21.881041   20650 certs.go:256] generating profile certs ...
	I1105 10:13:21.881133   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:13:21.881677   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:13:21.881747   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:13:21.881756   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:13:21.881777   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:13:21.881800   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:13:21.881819   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:13:21.881837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:13:21.881855   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:13:21.881874   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:13:21.881891   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:13:21.881971   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:13:21.882008   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:13:21.882016   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:13:21.882051   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:13:21.882090   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:13:21.882131   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:13:21.882199   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:21.882240   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:21.882262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:13:21.882285   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:13:21.882314   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:13:21.882395   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:13:21.882480   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:13:21.882563   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:13:21.882639   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:13:21.908416   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:13:21.911559   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:13:21.921605   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:13:21.924753   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:13:21.933495   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:13:21.936611   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:13:21.945312   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:13:21.948273   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:13:21.957659   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:13:21.960739   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:13:21.969191   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:13:21.972356   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:13:21.981306   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:13:22.001469   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:13:22.021181   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:13:22.040587   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:13:22.060078   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:13:22.079285   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:13:22.098538   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:13:22.118296   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:13:22.137769   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:13:22.156929   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:13:22.176353   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:13:22.195510   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:13:22.209194   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:13:22.222827   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:13:22.236546   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:13:22.250070   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:13:22.263444   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:13:22.276970   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:13:22.290700   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:13:22.294935   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:13:22.304164   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307578   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307635   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.311940   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:13:22.320904   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:13:22.329872   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333271   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333318   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.337523   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:13:22.346681   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:13:22.355874   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359764   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359823   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.364168   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:13:22.373288   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:13:22.376713   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:13:22.381681   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:13:22.386495   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:13:22.390985   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:13:22.395318   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:13:22.399578   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:13:22.403998   20650 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:13:22.404052   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:13:22.404067   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:13:22.404115   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:13:22.417096   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:13:22.417139   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:13:22.417203   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:13:22.426058   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:13:22.426117   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:13:22.434774   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:13:22.448444   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:13:22.461910   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:13:22.475772   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:13:22.478602   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:22.487944   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.594180   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.608389   20650 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:13:22.608597   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:22.629533   20650 out.go:177] * Verifying Kubernetes components...
	I1105 10:13:22.671507   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.795219   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.807186   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:13:22.807391   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:13:22.807429   20650 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:13:22.807616   20650 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:22.807698   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:22.807704   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:22.807711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:22.807714   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.750948   20650 round_trippers.go:574] Response Status: 200 OK in 8943 milliseconds
	I1105 10:13:31.752572   20650 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:13:31.752585   20650 node_ready.go:38] duration metric: took 8.945035646s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:31.752614   20650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:31.752661   20650 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:13:31.752671   20650 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:13:31.752720   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:31.752727   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.752733   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.752738   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.802951   20650 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1105 10:13:31.809829   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.809889   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:13:31.809894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.809900   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.809904   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.814415   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:31.815355   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.815363   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.815369   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.815373   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.822380   20650 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:13:31.822662   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.822672   20650 pod_ready.go:82] duration metric: took 12.826683ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822679   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822728   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:13:31.822733   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.822739   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.822744   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.826328   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.826822   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.826831   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.826837   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.826841   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.829860   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.830181   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.830191   20650 pod_ready.go:82] duration metric: took 7.507226ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830198   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830235   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:13:31.830240   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.830245   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.830252   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.832219   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:31.832697   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.832706   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.832711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.832715   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.835276   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.835692   20650 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.835701   20650 pod_ready.go:82] duration metric: took 5.498306ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835709   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835747   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:13:31.835752   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.835758   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.835762   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.841537   20650 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:13:31.841973   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:31.841981   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.841986   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.841990   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844531   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.844869   20650 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.844879   20650 pod_ready.go:82] duration metric: took 9.164525ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844885   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844921   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:13:31.844926   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.844931   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844936   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.848600   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.952821   20650 request.go:632] Waited for 103.696334ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952860   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952865   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.952873   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.952877   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.955043   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:31.955226   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955236   20650 pod_ready.go:82] duration metric: took 110.346207ms for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:31.955242   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955257   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.153855   20650 request.go:632] Waited for 198.56381ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153901   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153906   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.153912   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.153915   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.156326   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.354721   20650 request.go:632] Waited for 197.883079ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354800   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354808   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.354816   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.354821   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.357314   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.357758   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.357771   20650 pod_ready.go:82] duration metric: took 402.50745ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.357779   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.554904   20650 request.go:632] Waited for 197.060501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555009   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555040   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.555059   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.555071   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.562819   20650 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 10:13:32.752788   20650 request.go:632] Waited for 189.599558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752820   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752825   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.752864   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.752870   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.755075   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.755378   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.755387   20650 pod_ready.go:82] duration metric: took 397.605979ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.755394   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.952787   20650 request.go:632] Waited for 197.357502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952836   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952842   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.952853   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.955636   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.153249   20650 request.go:632] Waited for 196.999871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153317   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153323   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.153329   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.153334   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.155712   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:33.155782   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155797   20650 pod_ready.go:82] duration metric: took 400.400564ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:33.155804   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155810   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.353944   20650 request.go:632] Waited for 198.075152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354021   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354033   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.354041   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.354047   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.356715   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.553130   20650 request.go:632] Waited for 196.01942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553198   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553204   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.553237   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.553242   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.555527   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.555890   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.555899   20650 pod_ready.go:82] duration metric: took 400.086552ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.555906   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.752845   20650 request.go:632] Waited for 196.894857ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752909   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752915   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.752921   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.752925   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.754805   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.953311   20650 request.go:632] Waited for 197.807461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953353   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953381   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.953389   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.953392   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.955376   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.955836   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.955846   20650 pod_ready.go:82] duration metric: took 399.938695ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.955855   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.153021   20650 request.go:632] Waited for 197.093812ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153060   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153065   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.153072   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.153075   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.155546   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:34.353423   20650 request.go:632] Waited for 197.340662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353457   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353463   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.353469   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.353472   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.355383   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.355495   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355514   20650 pod_ready.go:82] duration metric: took 399.657027ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.355524   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355532   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.553620   20650 request.go:632] Waited for 198.034445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553677   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553683   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.553689   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.553694   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.555564   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:34.753369   20650 request.go:632] Waited for 197.394131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753424   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753431   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.753436   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.753440   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.755363   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.755426   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755436   20650 pod_ready.go:82] duration metric: took 399.890345ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.755442   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755446   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.953531   20650 request.go:632] Waited for 198.038372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953615   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953624   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.953631   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.953636   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.955951   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.153813   20650 request.go:632] Waited for 196.981939ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153879   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.153903   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.153910   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.156466   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.157099   20650 pod_ready.go:93] pod "kube-proxy-m45pk" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.157109   20650 pod_ready.go:82] duration metric: took 401.65588ms for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.157117   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.354248   20650 request.go:632] Waited for 197.082179ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354294   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354302   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.354340   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.354347   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.357098   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.552778   20650 request.go:632] Waited for 195.237923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552882   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552910   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.552918   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.552923   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.555242   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.555725   20650 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.555734   20650 pod_ready.go:82] duration metric: took 398.615884ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.555748   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.752802   20650 request.go:632] Waited for 196.982082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752849   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752855   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.752861   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.752865   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.755216   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.953665   20650 request.go:632] Waited for 197.923503ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953733   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953742   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.953751   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.953758   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.955875   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.956268   20650 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.956277   20650 pod_ready.go:82] duration metric: took 400.526917ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.956283   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.153409   20650 request.go:632] Waited for 197.086533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153486   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153496   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.153504   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.153513   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.156474   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.354367   20650 request.go:632] Waited for 197.602225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354401   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354406   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.354421   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.354441   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.356601   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.356994   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.357004   20650 pod_ready.go:82] duration metric: took 400.718541ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.357011   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.554145   20650 request.go:632] Waited for 197.038016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554243   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554252   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.554264   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.554270   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.556774   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.753404   20650 request.go:632] Waited for 196.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753437   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753442   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.753448   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.753452   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.756764   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:36.757112   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.757122   20650 pod_ready.go:82] duration metric: took 400.109512ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.757130   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.953514   20650 request.go:632] Waited for 196.347448ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953546   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953558   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.953565   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.953575   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.955940   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.154619   20650 request.go:632] Waited for 198.194145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154663   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154669   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.154676   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.154695   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.157438   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:37.157524   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157535   20650 pod_ready.go:82] duration metric: took 400.40261ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:37.157542   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157547   20650 pod_ready.go:39] duration metric: took 5.404967892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:37.157569   20650 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:13:37.157646   20650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:13:37.171805   20650 api_server.go:72] duration metric: took 14.563521484s to wait for apiserver process to appear ...
	I1105 10:13:37.171821   20650 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:13:37.171836   20650 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:13:37.176463   20650 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:13:37.176507   20650 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:13:37.176512   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.176518   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.176523   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.177377   20650 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:13:37.177442   20650 api_server.go:141] control plane version: v1.31.2
	I1105 10:13:37.177460   20650 api_server.go:131] duration metric: took 5.62791ms to wait for apiserver health ...
	I1105 10:13:37.177467   20650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:13:37.352914   20650 request.go:632] Waited for 175.404088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352969   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352975   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.352982   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.352986   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.357439   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.362936   20650 system_pods.go:59] 26 kube-system pods found
	I1105 10:13:37.362960   20650 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.362964   20650 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.362968   20650 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.362973   20650 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.362980   20650 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.362984   20650 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.362987   20650 system_pods.go:61] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.362993   20650 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.362999   20650 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.363003   20650 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.363007   20650 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.363013   20650 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.363016   20650 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.363021   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.363024   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.363027   20650 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.363030   20650 system_pods.go:61] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.363036   20650 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 10:13:37.363040   20650 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.363042   20650 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.363046   20650 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.363051   20650 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.363055   20650 system_pods.go:61] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.363059   20650 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.363063   20650 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.363065   20650 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.363070   20650 system_pods.go:74] duration metric: took 185.599377ms to wait for pod list to return data ...
	I1105 10:13:37.363076   20650 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:13:37.554093   20650 request.go:632] Waited for 190.967335ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554130   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554138   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.554152   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.554156   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.557460   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:37.557594   20650 default_sa.go:45] found service account: "default"
	I1105 10:13:37.557604   20650 default_sa.go:55] duration metric: took 194.526347ms for default service account to be created ...
	I1105 10:13:37.557612   20650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:13:37.752842   20650 request.go:632] Waited for 195.185977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752875   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752881   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.752902   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.752907   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.757021   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.762493   20650 system_pods.go:86] 26 kube-system pods found
	I1105 10:13:37.762505   20650 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.762509   20650 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.762512   20650 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.762517   20650 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.762521   20650 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.762525   20650 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.762528   20650 system_pods.go:89] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.762532   20650 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.762535   20650 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.762539   20650 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.762543   20650 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.762548   20650 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.762551   20650 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.762557   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.762561   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.762566   20650 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.762569   20650 system_pods.go:89] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.762572   20650 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:13:37.762575   20650 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.762578   20650 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.762583   20650 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.762590   20650 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.762594   20650 system_pods.go:89] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.762596   20650 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.762600   20650 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.762602   20650 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.762607   20650 system_pods.go:126] duration metric: took 204.991818ms to wait for k8s-apps to be running ...
	I1105 10:13:37.762614   20650 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:13:37.762682   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:13:37.777110   20650 system_svc.go:56] duration metric: took 14.491738ms WaitForService to wait for kubelet
	I1105 10:13:37.777127   20650 kubeadm.go:582] duration metric: took 15.16885159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:13:37.777138   20650 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:13:37.952770   20650 request.go:632] Waited for 175.557407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952816   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952827   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.952839   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.955592   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.956364   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956379   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956390   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956393   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956397   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956399   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956403   20650 node_conditions.go:105] duration metric: took 179.263041ms to run NodePressure ...
	I1105 10:13:37.956411   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:13:37.956426   20650 start.go:255] writing updated cluster config ...
	I1105 10:13:37.978800   20650 out.go:201] 
	I1105 10:13:38.000237   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:38.000353   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.022912   20650 out.go:177] * Starting "ha-213000-m04" worker node in "ha-213000" cluster
	I1105 10:13:38.065816   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:13:38.065838   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:13:38.065942   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:13:38.065952   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:13:38.066024   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.066548   20650 start.go:360] acquireMachinesLock for ha-213000-m04: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:13:38.066601   20650 start.go:364] duration metric: took 39.836µs to acquireMachinesLock for "ha-213000-m04"
	I1105 10:13:38.066614   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:13:38.066619   20650 fix.go:54] fixHost starting: m04
	I1105 10:13:38.066839   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:38.066859   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:38.078183   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59062
	I1105 10:13:38.078511   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:38.078858   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:38.078877   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:38.079111   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:38.079203   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.079308   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:13:38.079392   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.079457   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:13:38.080557   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.080601   20650 fix.go:112] recreateIfNeeded on ha-213000-m04: state=Stopped err=<nil>
	I1105 10:13:38.080610   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	W1105 10:13:38.080695   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:13:38.101909   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m04" ...
	I1105 10:13:38.150121   20650 main.go:141] libmachine: (ha-213000-m04) Calling .Start
	I1105 10:13:38.150270   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.150297   20650 main.go:141] libmachine: (ha-213000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid
	I1105 10:13:38.151495   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.151504   20650 main.go:141] libmachine: (ha-213000-m04) DBG | pid 20571 is in state "Stopped"
	I1105 10:13:38.151536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid...
	I1105 10:13:38.151981   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Using UUID 70721578-92b7-4edc-935c-43ebcacd790c
	I1105 10:13:38.175524   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Generated MAC 1a:a3:f2:a5:2e:39
	I1105 10:13:38.175551   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:13:38.175756   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175805   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175883   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70721578-92b7-4edc-935c-43ebcacd790c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:13:38.175929   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70721578-92b7-4edc-935c-43ebcacd790c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:13:38.175943   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:13:38.177358   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Pid is 20690
	I1105 10:13:38.177760   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Attempt 0
	I1105 10:13:38.177775   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.177790   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:13:38.179817   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Searching for 1a:a3:f2:a5:2e:39 in /var/db/dhcpd_leases ...
	I1105 10:13:38.179881   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:13:38.179891   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:13:38.179930   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:13:38.179944   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:13:38.179961   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:13:38.179966   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found match: 1a:a3:f2:a5:2e:39
	I1105 10:13:38.179974   20650 main.go:141] libmachine: (ha-213000-m04) DBG | IP: 192.169.0.8
	I1105 10:13:38.180001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetConfigRaw
	I1105 10:13:38.180736   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:38.180968   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.181459   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:13:38.181471   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.181605   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:38.181707   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:38.181828   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.181929   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.182026   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:38.182165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:38.182315   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:38.182325   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:13:38.188897   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:13:38.198428   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:13:38.199856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.199886   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.199916   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.199953   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.594841   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:13:38.594856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:13:38.709716   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.709736   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.709743   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.709759   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.710592   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:13:38.710604   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:13:44.475519   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:13:44.475536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:13:44.475546   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:13:44.498793   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:49.237329   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:49.237349   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237488   20650 buildroot.go:166] provisioning hostname "ha-213000-m04"
	I1105 10:13:49.237500   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237590   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.237684   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.237765   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237842   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237935   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.238078   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.238220   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.238229   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m04 && echo "ha-213000-m04" | sudo tee /etc/hostname
	I1105 10:13:49.297417   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m04
	
	I1105 10:13:49.297437   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.297576   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.297673   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297757   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297853   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.297997   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.298162   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.298173   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:49.354308   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:49.354323   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:49.354341   20650 buildroot.go:174] setting up certificates
	I1105 10:13:49.354349   20650 provision.go:84] configureAuth start
	I1105 10:13:49.354357   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.354507   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:49.354606   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.354711   20650 provision.go:143] copyHostCerts
	I1105 10:13:49.354741   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354793   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:49.354799   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354909   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:49.355124   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355155   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:49.355159   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355228   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:49.355419   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355454   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:49.355461   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355528   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:49.355690   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m04 san=[127.0.0.1 192.169.0.8 ha-213000-m04 localhost minikube]
	I1105 10:13:49.396705   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:49.396767   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:49.396780   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.396910   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.397015   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.397117   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.397221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:49.427813   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:49.427885   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:49.447457   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:49.447518   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:49.467286   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:49.467359   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:13:49.487192   20650 provision.go:87] duration metric: took 132.83626ms to configureAuth
	I1105 10:13:49.487209   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:49.487380   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:49.487394   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:49.487531   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.487631   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.487715   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487801   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487890   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.488033   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.488154   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.488162   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:49.537465   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:49.537478   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:49.537561   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:49.537571   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.537704   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.537799   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537884   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537998   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.538165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.538298   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.538345   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:49.598479   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:49.598502   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.598649   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.598747   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598833   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598947   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.599089   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.599234   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.599246   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:51.207763   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:51.207782   20650 machine.go:96] duration metric: took 13.026432223s to provisionDockerMachine
	I1105 10:13:51.207792   20650 start.go:293] postStartSetup for "ha-213000-m04" (driver="hyperkit")
	I1105 10:13:51.207801   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:51.207816   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.208031   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:51.208047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.208140   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.208231   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.208318   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.208438   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.241123   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:51.244240   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:51.244251   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:51.244336   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:51.244477   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:51.244484   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:51.244646   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:51.252753   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:51.271782   20650 start.go:296] duration metric: took 63.980744ms for postStartSetup
	I1105 10:13:51.271803   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.271989   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:51.272001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.272093   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.272178   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.272277   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.272371   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.304392   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:51.304469   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:51.358605   20650 fix.go:56] duration metric: took 13.292102469s for fixHost
	I1105 10:13:51.358630   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.358783   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.358880   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.358963   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.359053   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.359195   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:51.359329   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:51.359336   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:51.407868   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830431.709090009
	
	I1105 10:13:51.407885   20650 fix.go:216] guest clock: 1730830431.709090009
	I1105 10:13:51.407890   20650 fix.go:229] Guest: 2024-11-05 10:13:51.709090009 -0800 PST Remote: 2024-11-05 10:13:51.35862 -0800 PST m=+89.911326584 (delta=350.470009ms)
	I1105 10:13:51.407901   20650 fix.go:200] guest clock delta is within tolerance: 350.470009ms
	I1105 10:13:51.407906   20650 start.go:83] releasing machines lock for "ha-213000-m04", held for 13.34141889s
	I1105 10:13:51.407923   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.408055   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:51.430524   20650 out.go:177] * Found network options:
	I1105 10:13:51.451633   20650 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:13:51.472140   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.472164   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.472179   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472739   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472888   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.473015   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1105 10:13:51.473025   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.473039   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.473047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473124   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:51.473137   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473175   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473286   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473299   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473387   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473400   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473487   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.473517   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473599   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	W1105 10:13:51.501432   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:51.501515   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:51.553972   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:51.553993   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.554083   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.569365   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:51.577607   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:51.586014   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:51.586084   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:51.594293   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.602646   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:51.610969   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.619400   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:51.627741   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:51.635982   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:51.645401   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:51.653565   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:51.660899   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:51.660963   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:51.669419   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:51.677143   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:51.772664   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:51.792178   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.792270   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:51.808083   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.820868   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:51.842221   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.854583   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.865539   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:51.892869   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.904042   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.922494   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:51.928520   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:51.945780   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:51.962437   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:52.060460   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:52.163232   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:52.163260   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:52.178328   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:52.296397   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:14:53.349067   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016016812s)
	I1105 10:14:53.349159   20650 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:14:53.385876   20650 out.go:201] 
	W1105 10:14:53.422606   20650 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:14:53.422674   20650 out.go:270] * 
	W1105 10:14:53.423462   20650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:14:53.533703   20650 out.go:201] 
	
	
	==> Docker <==
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321144470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358583815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358913638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358923588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.359308274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371019459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371180579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371195366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371264075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384883251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384945765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384958316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.385102977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.393595106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396454919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396464389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396559087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:54 ha-213000 dockerd[1151]: time="2024-11-05T18:14:54.321538330Z" level=info msg="ignoring event" container=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322187590Z" level=info msg="shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322448589Z" level=warning msg="cleaning up after shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322490228Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289904323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289952412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289962172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.290120529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b4e2f8c824d26       6e38f40d628db       About a minute ago   Running             storage-provisioner       5                   7a18da25cf537       storage-provisioner
	568ed995df15d       8c811b4aec35f       About a minute ago   Running             busybox                   2                   f5d092375dddf       busybox-7dff88458-q5j74
	a54d96a8e9e4d       9ca7e41918271       About a minute ago   Running             kindnet-cni               2                   07702f76ce639       kindnet-hppzk
	820b778421b38       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   bc67a22cb5eff       coredns-7c65d6cfc9-cv2cc
	ca9011bea4440       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   703f8fe612ac5       coredns-7c65d6cfc9-q96rw
	85e7cccdf4831       505d571f5fd56       About a minute ago   Running             kube-proxy                2                   7a4f7e3a95ced       kube-proxy-s8xxj
	ea27059bb8dad       6e38f40d628db       About a minute ago   Exited              storage-provisioner       4                   7a18da25cf537       storage-provisioner
	43950f04c89aa       0486b6c53a1b5       2 minutes ago        Running             kube-controller-manager   4                   3c4a95766d8df       kube-controller-manager-ha-213000
	8e0c0916fca71       9499c9960544e       2 minutes ago        Running             kube-apiserver            4                   f2454c695936e       kube-apiserver-ha-213000
	897300e44633b       baf03d14a86fd       3 minutes ago        Running             kube-vip                  1                   f00a17fab8835       kube-vip-ha-213000
	ad7975173845f       847c7bc1a5418       3 minutes ago        Running             kube-scheduler            2                   5162e28d0e03d       kube-scheduler-ha-213000
	8a28e20a2bf3d       2e96e5913fc06       3 minutes ago        Running             etcd                      2                   acdca4d26c9f6       etcd-ha-213000
	ea0b432d94423       0486b6c53a1b5       3 minutes ago        Exited              kube-controller-manager   3                   3c4a95766d8df       kube-controller-manager-ha-213000
	16b5e8baed219       9499c9960544e       3 minutes ago        Exited              kube-apiserver            3                   f2454c695936e       kube-apiserver-ha-213000
	96799b06e508f       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   07d926acb1a6e       busybox-7dff88458-q5j74
	86ef547964bcb       c69fa2e9cbf5f       5 minutes ago        Exited              coredns                   1                   5fe3e01a4f33a       coredns-7c65d6cfc9-q96rw
	dd08019aca606       c69fa2e9cbf5f       5 minutes ago        Exited              coredns                   1                   00f7c155eb4b0       coredns-7c65d6cfc9-cv2cc
	4aec0d02658e0       505d571f5fd56       5 minutes ago        Exited              kube-proxy                1                   1ece5e2bcaf09       kube-proxy-s8xxj
	f9a05b099e4ee       9ca7e41918271       5 minutes ago        Exited              kindnet-cni               1                   fd311d6ed9c5c       kindnet-hppzk
	51c2df7fc859d       baf03d14a86fd       6 minutes ago        Exited              kube-vip                  0                   98323683c9082       kube-vip-ha-213000
	bdbc1a6e54924       2e96e5913fc06       6 minutes ago        Exited              etcd                      1                   474c9f706901d       etcd-ha-213000
	f1607d6ea7a30       847c7bc1a5418       6 minutes ago        Exited              kube-scheduler            1                   b217215a9cf0c       kube-scheduler-ha-213000
	
	
	==> coredns [820b778421b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59240 - 59060 "HINFO IN 4329632244317726903.7890662898760833477. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011788676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[675101378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.641) (total time: 30001ms):
	Trace[675101378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.641)
	Trace[675101378]: [30.00107355s] [30.00107355s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[792881874]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30001ms):
	Trace[792881874]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:14:54.642)
	Trace[792881874]: [30.001711346s] [30.001711346s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[34248386]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[34248386]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[34248386]: [30.000366606s] [30.000366606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [86ef547964bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33774 - 54633 "HINFO IN 1409488340311598538.4125883895955909161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004156009s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1322590960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1322590960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.870)
	Trace[1322590960]: [30.003129161s] [30.003129161s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1548400132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30002ms):
	Trace[1548400132]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:11:00.870)
	Trace[1548400132]: [30.002952972s] [30.002952972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1633349832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.870) (total time: 30002ms):
	Trace[1633349832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[1633349832]: [30.002091533s] [30.002091533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca9011bea444] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47030 - 28453 "HINFO IN 9030478600017221968.7137590874178245370. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011696462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[954770416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30002ms):
	Trace[954770416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:14:54.642)
	Trace[954770416]: [30.002259073s] [30.002259073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1172241105]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1172241105]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[1172241105]: [30.000198867s] [30.000198867s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1149531028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1149531028]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.645)
	Trace[1149531028]: [30.000272321s] [30.000272321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [dd08019aca60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56311 - 34269 "HINFO IN 2200850437967647570.948968209837946997. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0110095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819586440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30001ms):
	Trace[819586440]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:11:00.870)
	Trace[819586440]: [30.001860838s] [30.001860838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[58172056]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.869) (total time: 30000ms):
	Trace[58172056]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[58172056]: [30.000759284s] [30.000759284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1700347832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1700347832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.871)
	Trace[1700347832]: [30.003960758s] [30.003960758s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1892e4225dd5499cb35e29ff753a0c40
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    872d5ac1-d893-413e-b883-f1ad425b7c82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           14m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeHasSufficientPID     7m7s (x7 over 7m7s)    kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m7s (x8 over 7m7s)    kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s (x8 over 7m7s)    kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m39s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m39s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m39s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m44s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           2m44s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           21s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dc248d7debd421bb4108dc092da24e0
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    8a40793c-3b3c-49c9-a112-66a753c3fa07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m41s                  kube-proxy       
	  Normal  Starting                 6m8s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-213000-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     6m24s (x7 over 6m24s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m24s (x8 over 6m24s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s (x8 over 6m24s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m56s (x8 over 2m56s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m56s (x8 over 2m56s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m56s (x7 over 2m56s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m44s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           2m44s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           21s                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:11:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 efb6d3b228624c8f9582b78a04751815
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    6405d175-8027-4e75-bb1e-1845fbf67784
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-28tbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kindnet-p4bx6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-m45pk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 4m35s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m12s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           6m11s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             5m32s                  node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           5m29s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m37s (x2 over 4m37s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m37s (x2 over 4m37s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x2 over 4m37s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m37s                  kubelet          Node ha-213000-m04 has been rebooted, boot id: 6405d175-8027-4e75-bb1e-1845fbf67784
	  Normal   NodeReady                4m37s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m44s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           2m44s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             2m4s                   node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           21s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	
	
	Name:               ha-213000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_15_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:16:10 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:16:10 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:16:10 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:16:10 +0000   Tue, 05 Nov 2024 18:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-213000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba49d86a1883402ebcff4760f7173855
	  System UUID:                39144d91-0000-0000-8f4c-e91cd4ad9fd9
	  Boot ID:                    dad28c98-204b-4595-92ed-10d65834fde9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-213000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27s
	  kube-system                 kindnet-gncwv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-apiserver-ha-213000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-ha-213000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-njqc5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-ha-213000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-vip-ha-213000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node ha-213000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node ha-213000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node ha-213000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	  Normal  RegisteredNode           21s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036175] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007972] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.844917] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000007] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006614] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702887] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.233657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.342806] systemd-fstab-generator[457]: Ignoring "noauto" option for root device
	[  +0.102790] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +2.007272] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.269734] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.085327] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.060857] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.057582] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.475879] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.104726] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.119211] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.130514] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.455084] systemd-fstab-generator[1568]: Ignoring "noauto" option for root device
	[  +6.862189] kauditd_printk_skb: 190 callbacks suppressed
	[Nov 5 18:13] kauditd_printk_skb: 40 callbacks suppressed
	[Nov 5 18:14] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8a28e20a2bf3] <==
	{"level":"info","ts":"2024-11-05T18:14:55.084484Z","caller":"traceutil/trace.go:171","msg":"trace[689855107] transaction","detail":"{read_only:false; response_revision:2931; number_of_response:1; }","duration":"110.3233ms","start":"2024-11-05T18:14:54.974150Z","end":"2024-11-05T18:14:55.084473Z","steps":["trace[689855107] 'process raft request'  (duration: 110.263526ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T18:15:51.034889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6366593563784330242 13314548521573537860) learners=(8641313866221225839)"}
	{"level":"info","ts":"2024-11-05T18:15:51.035473Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"77ec1d2d7cc6076f","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-11-05T18:15:51.035571Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.035712Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.036421Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.036659Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037105Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037265Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-11-05T18:15:51.037295Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037162Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"warn","ts":"2024-11-05T18:15:51.070004Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"77ec1d2d7cc6076f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-11-05T18:15:51.205387Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.9:2380/version","remote-member-id":"77ec1d2d7cc6076f","error":"Get \"https://192.169.0.9:2380/version\": dial tcp 192.169.0.9:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:15:51.205445Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77ec1d2d7cc6076f","error":"Get \"https://192.169.0.9:2380/version\": dial tcp 192.169.0.9:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:15:51.564464Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"77ec1d2d7cc6076f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-11-05T18:15:52.011350Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"77ec1d2d7cc6076f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-11-05T18:15:52.011393Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.011407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.016006Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"77ec1d2d7cc6076f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-11-05T18:15:52.016118Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.027894Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.031268Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.565744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6366593563784330242 8641313866221225839 13314548521573537860)"}
	{"level":"info","ts":"2024-11-05T18:15:52.565834Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-11-05T18:15:52.565950Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"77ec1d2d7cc6076f"}
	
	
	==> etcd [bdbc1a6e5492] <==
	{"level":"warn","ts":"2024-11-05T18:12:13.699058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:09.275669Z","time spent":"4.423385981s","remote":"127.0.0.1:52268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:13.283499Z","time spent":"415.604721ms","remote":"127.0.0.1:52350","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.487277082s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699158Z","caller":"traceutil/trace.go:171","msg":"trace[1772748615] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"7.487289106s","start":"2024-11-05T18:12:06.211867Z","end":"2024-11-05T18:12:13.699156Z","steps":["trace[1772748615] 'agreement among raft nodes before linearized reading'  (duration: 7.487277083s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699169Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:06.211838Z","time spent":"7.487327421s","remote":"127.0.0.1:52456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.037776693s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699221Z","caller":"traceutil/trace.go:171","msg":"trace[763418090] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"2.037787826s","start":"2024-11-05T18:12:11.661430Z","end":"2024-11-05T18:12:13.699218Z","steps":["trace[763418090] 'agreement among raft nodes before linearized reading'  (duration: 2.037776524s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:11.661414Z","time spent":"2.03781384s","remote":"127.0.0.1:52228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.734339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-05T18:12:13.734385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-11-05T18:12:13.734444Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-11-05T18:12:13.734706Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734723Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734820Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734866Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.735810Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735871Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735879Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-213000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 18:16:19 up 3 min,  0 users,  load average: 0.33, 0.17, 0.07
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a54d96a8e9e4] <==
	I1105 18:15:55.792775       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:15:55.792888       1 main.go:301] handling current node
	I1105 18:15:55.792908       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:15:55.792917       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:15:55.793456       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:15:55.793579       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:15:55.793978       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:15:55.794105       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	I1105 18:15:55.794696       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0 Realm: 0} 
	I1105 18:16:05.793158       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:16:05.793224       1 main.go:301] handling current node
	I1105 18:16:05.793241       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:16:05.793581       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:16:05.794428       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:16:05.794608       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:16:05.795140       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:16:05.795336       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	I1105 18:16:15.792658       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:16:15.792706       1 main.go:301] handling current node
	I1105 18:16:15.792719       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:16:15.792725       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:16:15.793418       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:16:15.793485       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:16:15.797231       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:16:15.797258       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f9a05b099e4e] <==
	I1105 18:11:41.574590       1 main.go:301] handling current node
	I1105 18:11:41.574599       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:41.574604       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:41.574749       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:41.574789       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567175       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:11:51.567282       1 main.go:301] handling current node
	I1105 18:11:51.567311       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:51.567325       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:51.567514       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:51.567574       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567879       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:11:51.567959       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:01.566316       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:01.566340       1 main.go:301] handling current node
	I1105 18:12:01.566353       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:01.566358       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:01.566565       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:01.566573       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:11.571151       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:11.571336       1 main.go:301] handling current node
	I1105 18:12:11.571478       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:11.571602       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:11.572596       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:11.572626       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [16b5e8baed21] <==
	I1105 18:12:47.610850       1 options.go:228] external host was not specified, using 192.169.0.5
	I1105 18:12:47.613755       1 server.go:142] Version: v1.31.2
	I1105 18:12:47.614011       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.895871       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:12:48.898884       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:12:48.901520       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:12:48.901573       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:12:48.902234       1 instance.go:232] Using reconciler: lease
	W1105 18:13:08.892813       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1105 18:13:08.896286       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1105 18:13:08.903685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W1105 18:13:08.903693       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [8e0c0916fca7] <==
	I1105 18:13:32.048504       1 establishing_controller.go:81] Starting EstablishingController
	I1105 18:13:32.048599       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1105 18:13:32.048646       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1105 18:13:32.048673       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1105 18:13:32.111932       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:13:32.112352       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:13:32.112415       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:13:32.112712       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:13:32.112790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:13:32.115714       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:13:32.115760       1 policy_source.go:224] refreshing policies
	I1105 18:13:32.115832       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:13:32.118673       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:13:32.126538       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:13:32.129328       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:13:32.136801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:13:32.137650       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:13:32.137679       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:13:32.137683       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:13:32.137688       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:13:32.144136       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1105 18:13:32.162460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1105 18:13:33.018201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:13:33.274965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:14:23.399590       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [43950f04c89a] <==
	I1105 18:15:03.665281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="257.839µs"
	I1105 18:15:03.683973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.253624ms"
	I1105 18:15:03.684142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.592µs"
	E1105 18:15:50.681201       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9ljhr failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9ljhr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1105 18:15:50.684700       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9ljhr failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9ljhr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1105 18:15:50.808750       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-213000-m05\" does not exist"
	I1105 18:15:50.821950       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-213000-m05" podCIDRs=["10.244.2.0/24"]
	I1105 18:15:50.821995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:50.822017       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:50.837924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:51.008189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:52.758496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:53.381023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:53.483104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:55.535311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:55.535903       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-213000-m05"
	I1105 18:15:58.376422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:15:58.431221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:58.475735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:16:00.948091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:05.645035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:08.579705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.430941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.443871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.527913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	
	
	==> kube-controller-manager [ea0b432d9442] <==
	I1105 18:12:48.246520       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:12:48.777745       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:12:48.777814       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.783136       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:12:48.783574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:12:48.783729       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:12:48.783931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1105 18:13:09.910735       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [4aec0d02658e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:10:30.967416       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:10:30.985864       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:10:30.985986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:10:31.019992       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:10:31.020085       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:10:31.020128       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:10:31.022301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:10:31.022843       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:10:31.022888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:10:31.026969       1 config.go:199] "Starting service config controller"
	I1105 18:10:31.027078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:10:31.027666       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:10:31.027692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:10:31.028138       1 config.go:328] "Starting node config controller"
	I1105 18:10:31.028170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:10:31.130453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:10:31.130459       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:10:31.130467       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [85e7cccdf483] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:14:24.812805       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:14:24.832536       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:14:24.832803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:14:24.864245       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:14:24.864284       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:14:24.864314       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:14:24.866476       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:14:24.868976       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:14:24.869009       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:14:24.872199       1 config.go:199] "Starting service config controller"
	I1105 18:14:24.872427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:14:24.872629       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:14:24.872656       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:14:24.874721       1 config.go:328] "Starting node config controller"
	I1105 18:14:24.874748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:14:24.974138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:14:24.974427       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:14:24.975147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad7975173845] <==
	W1105 18:13:17.072213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.072242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.177384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.177607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.472456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.472508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.646303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.646354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.851021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.851072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:18.674193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:18.674222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.133550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.133602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.167612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.167767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.410336       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.410541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.515934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.516006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.540843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.540926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.825617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.825717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I1105 18:13:32.157389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1607d6ea7a3] <==
	W1105 18:10:03.671887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 18:10:03.671970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 18:10:03.672285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:10:03.672503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.672829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672954       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 18:10:03.673005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:10:03.673161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 18:10:03.673298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.673427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.703301       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 18:10:03.703348       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1105 18:10:27.397168       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:11:49.191240       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	E1105 18:11:49.193101       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	I1105 18:11:49.193402       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-28tbv" node="ha-213000-m04"
	I1105 18:12:13.753881       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1105 18:12:13.756404       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1105 18:12:13.756765       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 05 18:14:23 ha-213000 kubelet[1575]: E1105 18:14:23.047096    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.299353    1575 apiserver.go:52] "Watching apiserver"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.401536    1575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.426959    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-xtables-lock\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427025    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-cni-cfg\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427041    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-lib-modules\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427052    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e7f00930-b382-473c-be59-04504c6e23ff-tmp\") pod \"storage-provisioner\" (UID: \"e7f00930-b382-473c-be59-04504c6e23ff\") " pod="kube-system/storage-provisioner"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427090    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-xtables-lock\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427103    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-lib-modules\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.446313    1575 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 05 18:14:24 ha-213000 kubelet[1575]: I1105 18:14:24.613521    1575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b"
	Nov 05 18:14:40 ha-213000 kubelet[1575]: E1105 18:14:40.279613    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971252    1575 scope.go:117] "RemoveContainer" containerID="6668904ee766d56b8d55ddf5af906befaf694e0933fdf7c8fdb3b42a676d0fb3"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971818    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: E1105 18:14:54.971979    1575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e7f00930-b382-473c-be59-04504c6e23ff)\"" pod="kube-system/storage-provisioner" podUID="e7f00930-b382-473c-be59-04504c6e23ff"
	Nov 05 18:15:08 ha-213000 kubelet[1575]: I1105 18:15:08.233582    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:15:40 ha-213000 kubelet[1575]: E1105 18:15:40.278228    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:15:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (79.61s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:309: expected profile "ha-213000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-213000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-213000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-213000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.9\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-213000 -n ha-213000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 logs -n 25: (3.505543415s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m04 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp testdata/cp-test.txt                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000 sudo cat                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m02 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | ha-213000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-213000 ssh -n ha-213000-m03 sudo cat                                                                                      | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-213000 node stop m02 -v=7                                                                                                 | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST | 05 Nov 24 10:05 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-213000 node start m02 -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:05 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000 -v=7                                                                                                       | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-213000 -v=7                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:08 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true -v=7                                                                                                | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:08 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-213000                                                                                                            | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST |                     |
	| node    | ha-213000 node delete m03 -v=7                                                                                               | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:11 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-213000 stop -v=7                                                                                                          | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:11 PST | 05 Nov 24 10:12 PST |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-213000 --wait=true                                                                                                     | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:12 PST |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-213000                                                                                                             | ha-213000 | jenkins | v1.34.0 | 05 Nov 24 10:15 PST | 05 Nov 24 10:16 PST |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 10:12:21
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 10:12:21.490688   20650 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.490996   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491002   20650 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.491006   20650 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.491183   20650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.492670   20650 out.go:352] Setting JSON to false
	I1105 10:12:21.523908   20650 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7910,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:12:21.523997   20650 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:12:21.546247   20650 out.go:177] * [ha-213000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:12:21.588131   20650 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:12:21.588174   20650 notify.go:220] Checking for updates...
	I1105 10:12:21.632868   20650 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:21.654057   20650 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:12:21.674788   20650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:12:21.696036   20650 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:12:21.717022   20650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:12:21.738560   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.739289   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.739362   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.752070   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59007
	I1105 10:12:21.752427   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.752834   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.752843   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.753115   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.753236   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.753425   20650 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:12:21.753684   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.753710   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.764480   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59009
	I1105 10:12:21.764817   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.765142   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.765158   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.765399   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.765513   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.796815   20650 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:12:21.838784   20650 start.go:297] selected driver: hyperkit
	I1105 10:12:21.838816   20650 start.go:901] validating driver "hyperkit" against &{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.839082   20650 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:12:21.839288   20650 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.839546   20650 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:12:21.851704   20650 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:12:21.858679   20650 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.858708   20650 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:12:21.864360   20650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:12:21.864394   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:21.864431   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:21.864510   20650 start.go:340] cluster config:
	{Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisi
oner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:21.864624   20650 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:12:21.886086   20650 out.go:177] * Starting "ha-213000" primary control-plane node in "ha-213000" cluster
	I1105 10:12:21.927848   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:21.927921   20650 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 10:12:21.927965   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:21.928204   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:21.928223   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:21.928393   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:21.929303   20650 start.go:360] acquireMachinesLock for ha-213000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:21.929483   20650 start.go:364] duration metric: took 156.606µs to acquireMachinesLock for "ha-213000"
	I1105 10:12:21.929515   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:21.929530   20650 fix.go:54] fixHost starting: 
	I1105 10:12:21.929991   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.930022   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.941843   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59011
	I1105 10:12:21.942146   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.942523   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.942539   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.942770   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.942869   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:21.942962   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.943046   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.943124   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.944238   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.944273   20650 fix.go:112] recreateIfNeeded on ha-213000: state=Stopped err=<nil>
	I1105 10:12:21.944288   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	W1105 10:12:21.944375   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:21.965704   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000" ...
	I1105 10:12:21.986830   20650 main.go:141] libmachine: (ha-213000) Calling .Start
	I1105 10:12:21.986975   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.987000   20650 main.go:141] libmachine: (ha-213000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid
	I1105 10:12:21.988429   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.988437   20650 main.go:141] libmachine: (ha-213000) DBG | pid 20508 is in state "Stopped"
	I1105 10:12:21.988449   20650 main.go:141] libmachine: (ha-213000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid...
	I1105 10:12:21.988605   20650 main.go:141] libmachine: (ha-213000) DBG | Using UUID 1736dd54-77fc-4deb-8a00-7267ff6ac6e0
	I1105 10:12:22.098530   20650 main.go:141] libmachine: (ha-213000) DBG | Generated MAC 82:fc:3d:82:28:7c
	I1105 10:12:22.098573   20650 main.go:141] libmachine: (ha-213000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:22.098772   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098813   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1736dd54-77fc-4deb-8a00-7267ff6ac6e0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000432b70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:22.098872   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1736dd54-77fc-4deb-8a00-7267ff6ac6e0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:22.098916   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1736dd54-77fc-4deb-8a00-7267ff6ac6e0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/ha-213000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:22.098942   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:22.100556   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 DEBUG: hyperkit: Pid is 20664
	I1105 10:12:22.101143   20650 main.go:141] libmachine: (ha-213000) DBG | Attempt 0
	I1105 10:12:22.101159   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:22.101260   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:12:22.103059   20650 main.go:141] libmachine: (ha-213000) DBG | Searching for 82:fc:3d:82:28:7c in /var/db/dhcpd_leases ...
	I1105 10:12:22.103211   20650 main.go:141] libmachine: (ha-213000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:22.103230   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:22.103244   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:22.103282   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:22.103300   20650 main.go:141] libmachine: (ha-213000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6d37}
	I1105 10:12:22.103320   20650 main.go:141] libmachine: (ha-213000) DBG | Found match: 82:fc:3d:82:28:7c
	I1105 10:12:22.103326   20650 main.go:141] libmachine: (ha-213000) Calling .GetConfigRaw
	I1105 10:12:22.103333   20650 main.go:141] libmachine: (ha-213000) DBG | IP: 192.169.0.5
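The lease search above scans macOS's `/var/db/dhcpd_leases` for the VM's generated MAC and takes the IP from the matching entry. A minimal sketch of that lookup (the inline lease text and field layout are illustrative, modeled on the dhcpd_leases format; the driver's actual parser lives in Go):

```shell
# Find the IP bound to a given MAC in dhcpd_leases-style text.
# The lease block below is a hypothetical sample, not read from /var/db.
leases='{
	name=minikube
	ip_address=192.169.0.5
	hw_address=1,82:fc:3d:82:28:7c
	lease=0x672a6d37
}'
mac="82:fc:3d:82:28:7c"
# ip_address precedes hw_address in each block, so remember the last IP
# seen and print it when the MAC matches.
ip=$(printf '%s\n' "$leases" | awk -F= -v mac="$mac" '
	/ip_address=/ { ip=$2 }
	/hw_address=/ && index($2, mac) { print ip }')
echo "$ip"
```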
	I1105 10:12:22.104301   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:22.104508   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:22.104940   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:22.104951   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:22.105084   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:22.105206   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:22.105334   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105499   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:22.105662   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:22.106057   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:22.106277   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:22.106287   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:22.111841   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:22.167275   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:22.168436   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.168488   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.168505   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.168538   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.563375   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:22.563390   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:22.678087   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:22.678107   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:22.678118   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:22.678127   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:22.678997   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:22.679010   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:28.419344   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:28.419383   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:28.419395   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:28.443700   20650 main.go:141] libmachine: (ha-213000) DBG | 2024/11/05 10:12:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:12:33.165174   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:12:33.165187   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165353   20650 buildroot.go:166] provisioning hostname "ha-213000"
	I1105 10:12:33.165363   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.165462   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.165555   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.165665   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165766   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.165883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.166032   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.166168   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.166176   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000 && echo "ha-213000" | sudo tee /etc/hostname
	I1105 10:12:33.233946   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000
	
	I1105 10:12:33.233965   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.234107   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.234213   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234303   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.234419   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.234574   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.234722   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.234733   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:12:33.296276   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
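The SSH snippet above either rewrites an existing `127.0.1.1` line or appends one so the hostname resolves locally. The same logic can be exercised against a temp copy of a hosts file (paths and the pre-existing `old-name` entry are illustrative):

```shell
# Sketch of the /etc/hosts update logged above, run on a temp file
# instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name="ha-213000"
if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
	if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
		# An entry exists: rewrite it in place.
		sed -i.bak "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
	else
		# No entry yet: append one.
		echo "127.0.1.1 ${name}" >> "$hosts"
	fi
fi
grep '^127\.0\.1\.1' "$hosts"
```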
	I1105 10:12:33.296296   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:12:33.296314   20650 buildroot.go:174] setting up certificates
	I1105 10:12:33.296331   20650 provision.go:84] configureAuth start
	I1105 10:12:33.296340   20650 main.go:141] libmachine: (ha-213000) Calling .GetMachineName
	I1105 10:12:33.296489   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:33.296589   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.296674   20650 provision.go:143] copyHostCerts
	I1105 10:12:33.296705   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296779   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:12:33.296787   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:12:33.296976   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:12:33.297202   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297251   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:12:33.297256   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:12:33.297953   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:12:33.298150   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298199   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:12:33.298205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:12:33.298290   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:12:33.298468   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000 san=[127.0.0.1 192.169.0.5 ha-213000 localhost minikube]
	I1105 10:12:33.417814   20650 provision.go:177] copyRemoteCerts
	I1105 10:12:33.417886   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:12:33.417904   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.418044   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.418142   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.418231   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.418333   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:33.452233   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:12:33.452305   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 10:12:33.471837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:12:33.471904   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:12:33.491510   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:12:33.491572   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:12:33.511221   20650 provision.go:87] duration metric: took 214.877215ms to configureAuth
	I1105 10:12:33.511235   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:12:33.511399   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:33.511412   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:33.511554   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.511653   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.511767   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511859   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.511944   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.512074   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.512201   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.512209   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:12:33.567448   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:12:33.567460   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:12:33.567540   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:12:33.567552   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.567685   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.567782   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567875   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.567957   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.568105   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.568243   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.568289   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:12:33.633746   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:12:33.633770   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:33.633912   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:33.634017   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634113   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:33.634221   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:33.634373   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:33.634523   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:33.634538   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:12:35.361033   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:12:35.361047   20650 machine.go:96] duration metric: took 13.256219662s to provisionDockerMachine
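The `diff ... || { mv ...; systemctl ...; }` command above follows a write-then-install pattern: the new unit is written to `docker.service.new`, and only if it differs from (or there is no) existing unit is it moved into place and the daemon restarted. A standalone sketch, using a temp directory in place of `/lib/systemd/system` and omitting the systemctl steps:

```shell
# Install-if-changed pattern from the log above; $dir stands in for
# /lib/systemd/system, and the unit body is a trivial placeholder.
dir=$(mktemp -d)
printf '[Unit]\nDescription=demo\n' > "$dir/docker.service.new"
# diff exits non-zero when files differ or the old file is missing,
# which triggers the install branch.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null; then
	mv "$dir/docker.service.new" "$dir/docker.service"
	echo "installed new unit"
fi
```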
	I1105 10:12:35.361058   20650 start.go:293] postStartSetup for "ha-213000" (driver="hyperkit")
	I1105 10:12:35.361081   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:12:35.361095   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.361306   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:12:35.361323   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.361415   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.361506   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.361580   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.361669   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.396970   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:12:35.400946   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:12:35.400961   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:12:35.401074   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:12:35.401496   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:12:35.401503   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:12:35.401766   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:12:35.411536   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:35.443784   20650 start.go:296] duration metric: took 82.704716ms for postStartSetup
	I1105 10:12:35.443806   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.444003   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:12:35.444016   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.444100   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.444180   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.444258   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.444349   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.477407   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:12:35.477482   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:12:35.509435   20650 fix.go:56] duration metric: took 13.580030444s for fixHost
	I1105 10:12:35.509456   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.509592   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.509688   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509776   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.509883   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.510031   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:35.510178   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I1105 10:12:35.510185   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:12:35.565839   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830355.864292832
	
	I1105 10:12:35.565852   20650 fix.go:216] guest clock: 1730830355.864292832
	I1105 10:12:35.565857   20650 fix.go:229] Guest: 2024-11-05 10:12:35.864292832 -0800 PST Remote: 2024-11-05 10:12:35.509447 -0800 PST m=+14.061466364 (delta=354.845832ms)
	I1105 10:12:35.565875   20650 fix.go:200] guest clock delta is within tolerance: 354.845832ms
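The clock check above runs `date +%s.%N` in the guest, subtracts the host timestamp, and accepts the result if the absolute delta is under a tolerance. A sketch of that arithmetic using the two timestamps from this log (the 1-second tolerance here is illustrative, not minikube's configured value):

```shell
# Guest-vs-host clock delta, with the timestamps taken from the log above.
guest=1730830355.864292832
host=1730830355.509447000
delta=$(awk -v g="$guest" -v h="$host" \
	'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6f", d }')
tolerance=1.0
# Exit status of the awk comparison drives the pass/fail message.
awk -v d="$delta" -v t="$tolerance" 'BEGIN { exit !(d < t) }' \
	&& echo "within tolerance: ${delta}s"
```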
	I1105 10:12:35.565882   20650 start.go:83] releasing machines lock for "ha-213000", held for 13.636511126s
	I1105 10:12:35.565900   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566049   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:35.566151   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566446   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566554   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:12:35.566709   20650 ssh_runner.go:195] Run: cat /version.json
	I1105 10:12:35.566721   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.566806   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.566888   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.566979   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567064   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.567357   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:12:35.567386   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:12:35.567477   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:12:35.567559   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:12:35.567637   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:12:35.567715   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:12:35.649786   20650 ssh_runner.go:195] Run: systemctl --version
	I1105 10:12:35.655155   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:12:35.659391   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:12:35.659449   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:12:35.672884   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:12:35.672896   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.672997   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:35.691142   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:12:35.700361   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:12:35.709604   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:12:35.709664   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:12:35.718677   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.727574   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:12:35.736665   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:12:35.745463   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:12:35.754435   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:12:35.763449   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:12:35.772263   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:12:35.781386   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:12:35.789651   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:12:35.789704   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:12:35.798805   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:12:35.807011   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:35.912193   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:12:35.927985   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:12:35.928078   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:12:35.940041   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.954880   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:12:35.969797   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:12:35.981073   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:35.992124   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:12:36.016061   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:12:36.027432   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:12:36.042843   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:12:36.045910   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:12:36.054070   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:12:36.067653   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:12:36.164803   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:12:36.262358   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:12:36.262434   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:12:36.276549   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:36.372055   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:12:38.718640   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.346585524s)
	I1105 10:12:38.718725   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:12:38.729009   20650 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 10:12:38.741745   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:38.752392   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:12:38.846699   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:12:38.961329   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.072900   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:12:39.086802   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:12:39.097743   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.205555   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:12:39.272726   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:12:39.273861   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:12:39.278279   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:12:39.278336   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:12:39.281386   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:12:39.307263   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:12:39.307378   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.325423   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:12:39.384603   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:12:39.384677   20650 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:12:39.385383   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:12:39.389204   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.398876   20650 kubeadm.go:883] updating cluster {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 10:12:39.398970   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:39.399044   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.411346   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.411370   20650 docker.go:619] Images already preloaded, skipping extraction
	I1105 10:12:39.411458   20650 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 10:12:39.424491   20650 docker.go:689] Got preloaded images: -- stdout --
	ghcr.io/kube-vip/kube-vip:v0.8.6
	kindest/kindnetd:v20241023-a345ebe4
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1105 10:12:39.424511   20650 cache_images.go:84] Images are preloaded, skipping loading
	I1105 10:12:39.424518   20650 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.2 docker true true} ...
	I1105 10:12:39.424600   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:12:39.424690   20650 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 10:12:39.458782   20650 cni.go:84] Creating CNI manager for ""
	I1105 10:12:39.458796   20650 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1105 10:12:39.458807   20650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 10:12:39.458824   20650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-213000 NodeName:ha-213000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 10:12:39.458910   20650 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-213000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 10:12:39.458922   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:12:39.459000   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:12:39.472063   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:12:39.472130   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:12:39.472197   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:12:39.480694   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:12:39.480761   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 10:12:39.488010   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I1105 10:12:39.501448   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:12:39.514699   20650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2284 bytes)
	I1105 10:12:39.528604   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:12:39.542711   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:12:39.545676   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:12:39.555042   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:12:39.651842   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:12:39.666232   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.5
	I1105 10:12:39.666245   20650 certs.go:194] generating shared ca certs ...
	I1105 10:12:39.666254   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.666455   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:12:39.666548   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:12:39.666558   20650 certs.go:256] generating profile certs ...
	I1105 10:12:39.666641   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:12:39.666660   20650 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b
	I1105 10:12:39.666677   20650 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I1105 10:12:39.768951   20650 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b ...
	I1105 10:12:39.768965   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b: {Name:mk94691c5901a2a72a9bc83f127c5282216d457c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.769986   20650 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b ...
	I1105 10:12:39.770003   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b: {Name:mk80fa552a8414775a1a2e3534b5be60adeae6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:39.770739   20650 certs.go:381] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt
	I1105 10:12:39.770972   20650 certs.go:385] copying /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.9aa46c7b -> /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key
	I1105 10:12:39.771252   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:12:39.771262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:12:39.771288   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:12:39.771314   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:12:39.771335   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:12:39.771353   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:12:39.771376   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:12:39.771395   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:12:39.771413   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:12:39.771524   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:12:39.771579   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:12:39.771588   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:12:39.771622   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:12:39.771657   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:12:39.771686   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:12:39.771750   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:12:39.771787   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:12:39.771817   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:39.771836   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:12:39.772313   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:12:39.799103   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:12:39.823713   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:12:39.848122   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:12:39.876362   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:12:39.898968   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:12:39.924496   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:12:39.975578   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:12:40.017567   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:12:40.062386   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:12:40.134510   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:12:40.170763   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 10:12:40.196135   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:12:40.201525   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:12:40.214259   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222331   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.222400   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:12:40.235959   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:12:40.247519   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:12:40.256007   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259529   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.259576   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:12:40.263770   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:12:40.272126   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:12:40.280328   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283753   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.283804   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:12:40.288095   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:12:40.296378   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:12:40.300009   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:12:40.304421   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:12:40.309440   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:12:40.314156   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:12:40.318720   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:12:40.323054   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:12:40.327653   20650 kubeadm.go:392] StartCluster: {Name:ha-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:12:40.327789   20650 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 10:12:40.338896   20650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 10:12:40.346426   20650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 10:12:40.346451   20650 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 10:12:40.346505   20650 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 10:12:40.354659   20650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:12:40.354973   20650 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-213000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355052   20650 kubeconfig.go:62] /Users/jenkins/minikube-integration/19910-17277/kubeconfig needs updating (will repair): [kubeconfig missing "ha-213000" cluster setting kubeconfig missing "ha-213000" context setting]
	I1105 10:12:40.355252   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.355659   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.355866   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 10:12:40.356225   20650 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 10:12:40.356390   20650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 10:12:40.363779   20650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I1105 10:12:40.363792   20650 kubeadm.go:597] duration metric: took 17.337248ms to restartPrimaryControlPlane
	I1105 10:12:40.363798   20650 kubeadm.go:394] duration metric: took 36.151791ms to StartCluster
	I1105 10:12:40.363807   20650 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.363904   20650 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:12:40.364287   20650 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:12:40.364493   20650 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:12:40.364506   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:12:40.364518   20650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 10:12:40.364641   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.406496   20650 out.go:177] * Enabled addons: 
	I1105 10:12:40.427423   20650 addons.go:510] duration metric: took 62.890869ms for enable addons: enabled=[]
	I1105 10:12:40.427463   20650 start.go:246] waiting for cluster config update ...
	I1105 10:12:40.427476   20650 start.go:255] writing updated cluster config ...
	I1105 10:12:40.449627   20650 out.go:201] 
	I1105 10:12:40.470603   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:40.470682   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.492690   20650 out.go:177] * Starting "ha-213000-m02" control-plane node in "ha-213000" cluster
	I1105 10:12:40.534643   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:12:40.534678   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:12:40.534889   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:12:40.534908   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:12:40.535035   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.535960   20650 start.go:360] acquireMachinesLock for ha-213000-m02: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:12:40.536081   20650 start.go:364] duration metric: took 95.311µs to acquireMachinesLock for "ha-213000-m02"
	I1105 10:12:40.536107   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:12:40.536116   20650 fix.go:54] fixHost starting: m02
	I1105 10:12:40.536544   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:40.536591   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:40.548252   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59033
	I1105 10:12:40.548561   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:40.548918   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:12:40.548932   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:40.549159   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:40.549276   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.549386   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:40.549477   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.549545   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:40.550641   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.550670   20650 fix.go:112] recreateIfNeeded on ha-213000-m02: state=Stopped err=<nil>
	I1105 10:12:40.550679   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	W1105 10:12:40.550782   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:12:40.571623   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m02" ...
	I1105 10:12:40.592623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .Start
	I1105 10:12:40.592918   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.592966   20650 main.go:141] libmachine: (ha-213000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid
	I1105 10:12:40.594491   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:40.594501   20650 main.go:141] libmachine: (ha-213000-m02) DBG | pid 20524 is in state "Stopped"
	I1105 10:12:40.594516   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid...
	I1105 10:12:40.594967   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Using UUID 8475f971-284e-486e-b8b0-772de8e0415c
	I1105 10:12:40.619713   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Generated MAC 4a:4e:c6:49:69:60
	I1105 10:12:40.619737   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:12:40.619893   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619922   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8475f971-284e-486e-b8b0-772de8e0415c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041eb70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:12:40.619952   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8475f971-284e-486e-b8b0-772de8e0415c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/
machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:12:40.619999   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8475f971-284e-486e-b8b0-772de8e0415c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/ha-213000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:12:40.620018   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:12:40.621465   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 DEBUG: hyperkit: Pid is 20673
	I1105 10:12:40.621946   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Attempt 0
	I1105 10:12:40.621963   20650 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:40.622060   20650 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20673
	I1105 10:12:40.623801   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Searching for 4a:4e:c6:49:69:60 in /var/db/dhcpd_leases ...
	I1105 10:12:40.623940   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:12:40.623961   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:12:40.623986   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:12:40.624000   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:12:40.624015   20650 main.go:141] libmachine: (ha-213000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6d62}
	I1105 10:12:40.624016   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetConfigRaw
	I1105 10:12:40.624023   20650 main.go:141] libmachine: (ha-213000-m02) DBG | Found match: 4a:4e:c6:49:69:60
	I1105 10:12:40.624043   20650 main.go:141] libmachine: (ha-213000-m02) DBG | IP: 192.169.0.6
	I1105 10:12:40.624734   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:12:40.624956   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:12:40.625445   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:12:40.625455   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:12:40.625562   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:12:40.625653   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:12:40.625748   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.625874   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:12:40.626045   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:12:40.626222   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:12:40.626362   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:12:40.626369   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:12:40.631955   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:12:40.641267   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:12:40.642527   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:40.642544   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:40.642551   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:40.642561   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.034838   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:12:41.034853   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:12:41.149888   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:12:41.149903   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:12:41.149911   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:12:41.149917   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:12:41.150684   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:12:41.150696   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:12:46.914486   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:12:46.914552   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:12:46.914564   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:12:46.937828   20650 main.go:141] libmachine: (ha-213000-m02) DBG | 2024/11/05 10:12:46 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:15.697814   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:15.697829   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.697958   20650 buildroot.go:166] provisioning hostname "ha-213000-m02"
	I1105 10:13:15.697969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.698068   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.698166   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.698262   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698349   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.698429   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.698590   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.698739   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.698748   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m02 && echo "ha-213000-m02" | sudo tee /etc/hostname
	I1105 10:13:15.770158   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m02
	
	I1105 10:13:15.770174   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.770319   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.770428   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770526   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.770623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.770785   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.770922   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.770933   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:15.838124   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:15.838139   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:15.838159   20650 buildroot.go:174] setting up certificates
	I1105 10:13:15.838166   20650 provision.go:84] configureAuth start
	I1105 10:13:15.838173   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetMachineName
	I1105 10:13:15.838309   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:15.838391   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.838477   20650 provision.go:143] copyHostCerts
	I1105 10:13:15.838504   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838551   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:15.838557   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:15.838677   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:15.838892   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.838922   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:15.838926   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:15.839007   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:15.839169   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839200   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:15.839205   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:15.839275   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:15.839440   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m02 san=[127.0.0.1 192.169.0.6 ha-213000-m02 localhost minikube]
	I1105 10:13:15.878682   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:15.878747   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:15.878761   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.878912   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.879015   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.879122   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.879221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:15.916727   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:15.916795   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:15.936280   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:15.936341   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:13:15.956339   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:15.956417   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:15.976131   20650 provision.go:87] duration metric: took 137.957663ms to configureAuth
	I1105 10:13:15.976145   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:15.976324   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:15.976339   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:15.976475   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:15.976573   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:15.976661   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976740   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:15.976813   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:15.976940   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:15.977065   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:15.977072   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:16.038725   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:16.038739   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:16.038839   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:16.038851   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.038998   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.039098   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039192   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.039283   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.039436   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.039572   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.039618   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:16.112446   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:16.112468   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:16.112623   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:16.112715   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112811   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:16.112892   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:16.113049   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:16.113223   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:16.113236   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:17.783702   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:17.783717   20650 machine.go:96] duration metric: took 37.158599705s to provisionDockerMachine
	I1105 10:13:17.783726   20650 start.go:293] postStartSetup for "ha-213000-m02" (driver="hyperkit")
	I1105 10:13:17.783733   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:17.783744   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.783939   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:17.783953   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.784616   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.785152   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.785404   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.785500   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.822226   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:17.825293   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:17.825304   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:17.825392   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:17.825532   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:17.825538   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:17.825699   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:17.832977   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:17.852599   20650 start.go:296] duration metric: took 68.865935ms for postStartSetup
	I1105 10:13:17.852645   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:17.852828   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:17.852840   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.852946   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.853034   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.853111   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.853195   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:17.891315   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:17.891389   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:17.944504   20650 fix.go:56] duration metric: took 37.408724528s for fixHost
	I1105 10:13:17.944528   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:17.944681   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:17.944779   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944880   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:17.944973   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:17.945125   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:17.945257   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I1105 10:13:17.945264   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:18.009463   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830397.963598694
	
	I1105 10:13:18.009476   20650 fix.go:216] guest clock: 1730830397.963598694
	I1105 10:13:18.009482   20650 fix.go:229] Guest: 2024-11-05 10:13:17.963598694 -0800 PST Remote: 2024-11-05 10:13:17.944519 -0800 PST m=+56.496923048 (delta=19.079694ms)
	I1105 10:13:18.009492   20650 fix.go:200] guest clock delta is within tolerance: 19.079694ms
	I1105 10:13:18.009495   20650 start.go:83] releasing machines lock for "ha-213000-m02", held for 37.47374268s
	I1105 10:13:18.009512   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.009649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:18.032281   20650 out.go:177] * Found network options:
	I1105 10:13:18.052088   20650 out.go:177]   - NO_PROXY=192.169.0.5
	W1105 10:13:18.073014   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.073053   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.073969   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074186   20650 main.go:141] libmachine: (ha-213000-m02) Calling .DriverName
	I1105 10:13:18.074319   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:13:18.074355   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	W1105 10:13:18.074369   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:18.074467   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:18.074483   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHHostname
	I1105 10:13:18.074488   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074646   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHPort
	I1105 10:13:18.074649   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074801   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.074850   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHKeyPath
	I1105 10:13:18.074993   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	I1105 10:13:18.075008   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetSSHUsername
	I1105 10:13:18.075127   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m02/id_rsa Username:docker}
	W1105 10:13:18.108947   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:18.109027   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:18.155414   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:18.155436   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.155551   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.172114   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:18.180388   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:18.188528   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.188587   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:18.196712   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.204897   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:18.213206   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:18.221579   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:18.230149   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:18.238366   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:18.246617   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:18.255037   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:18.262631   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:18.262690   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:18.270933   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:18.278375   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.375712   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:18.394397   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:18.394485   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:18.410636   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.423391   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:18.441876   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:18.452612   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.462897   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:18.485662   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:18.495897   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:18.511009   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:18.513991   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:18.521476   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:18.534868   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:18.632191   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:18.734981   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:18.735009   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:18.749050   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:18.853897   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:13:21.134871   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.28097554s)
	I1105 10:13:21.134948   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 10:13:21.146360   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.157264   20650 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 10:13:21.267741   20650 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 10:13:21.382285   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.483458   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 10:13:21.496077   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 10:13:21.506512   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:21.618640   20650 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 10:13:21.685448   20650 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 10:13:21.685559   20650 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 10:13:21.689888   20650 start.go:563] Will wait 60s for crictl version
	I1105 10:13:21.689958   20650 ssh_runner.go:195] Run: which crictl
	I1105 10:13:21.693059   20650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 10:13:21.721401   20650 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1105 10:13:21.721489   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.737796   20650 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 10:13:21.775162   20650 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 10:13:21.818311   20650 out.go:177]   - env NO_PROXY=192.169.0.5
	I1105 10:13:21.839158   20650 main.go:141] libmachine: (ha-213000-m02) Calling .GetIP
	I1105 10:13:21.839596   20650 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 10:13:21.844257   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:21.854347   20650 mustload.go:65] Loading cluster: ha-213000
	I1105 10:13:21.854526   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:21.854763   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.854810   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.866117   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59055
	I1105 10:13:21.866449   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.866785   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.866795   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.867005   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.867094   20650 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:13:21.867180   20650 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:21.867248   20650 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20664
	I1105 10:13:21.868436   20650 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:13:21.868696   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:21.868721   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:21.879648   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59057
	I1105 10:13:21.879951   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:21.880304   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:21.880326   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:21.880564   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:21.880680   20650 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:13:21.880800   20650 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000 for IP: 192.169.0.6
	I1105 10:13:21.880806   20650 certs.go:194] generating shared ca certs ...
	I1105 10:13:21.880817   20650 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:13:21.880976   20650 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 10:13:21.881033   20650 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 10:13:21.881041   20650 certs.go:256] generating profile certs ...
	I1105 10:13:21.881133   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key
	I1105 10:13:21.881677   20650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key.72f96614
	I1105 10:13:21.881747   20650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key
	I1105 10:13:21.881756   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 10:13:21.881777   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 10:13:21.881800   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 10:13:21.881819   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 10:13:21.881837   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 10:13:21.881855   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 10:13:21.881874   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 10:13:21.881891   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 10:13:21.881971   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 10:13:21.882008   20650 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 10:13:21.882016   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 10:13:21.882051   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 10:13:21.882090   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 10:13:21.882131   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 10:13:21.882199   20650 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:21.882240   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:21.882262   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem -> /usr/share/ca-certificates/17842.pem
	I1105 10:13:21.882285   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /usr/share/ca-certificates/178422.pem
	I1105 10:13:21.882314   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:13:21.882395   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:13:21.882480   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:13:21.882563   20650 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:13:21.882639   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:13:21.908416   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 10:13:21.911559   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 10:13:21.921605   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 10:13:21.924753   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 10:13:21.933495   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 10:13:21.936611   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 10:13:21.945312   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 10:13:21.948273   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 10:13:21.957659   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 10:13:21.960739   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 10:13:21.969191   20650 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 10:13:21.972356   20650 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1105 10:13:21.981306   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 10:13:22.001469   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 10:13:22.021181   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 10:13:22.040587   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 10:13:22.060078   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1105 10:13:22.079285   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 10:13:22.098538   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 10:13:22.118296   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 10:13:22.137769   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 10:13:22.156929   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 10:13:22.176353   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 10:13:22.195510   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 10:13:22.209194   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 10:13:22.222827   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 10:13:22.236546   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 10:13:22.250070   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 10:13:22.263444   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1105 10:13:22.276970   20650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 10:13:22.290700   20650 ssh_runner.go:195] Run: openssl version
	I1105 10:13:22.294935   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 10:13:22.304164   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307578   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.307635   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 10:13:22.311940   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 10:13:22.320904   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 10:13:22.329872   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333271   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.333318   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 10:13:22.337523   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 10:13:22.346681   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 10:13:22.355874   20650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359764   20650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.359823   20650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 10:13:22.364168   20650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 10:13:22.373288   20650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 10:13:22.376713   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 10:13:22.381681   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 10:13:22.386495   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 10:13:22.390985   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 10:13:22.395318   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 10:13:22.399578   20650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 10:13:22.403998   20650 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.2 docker true true} ...
	I1105 10:13:22.404052   20650 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-213000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-213000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 10:13:22.404067   20650 kube-vip.go:115] generating kube-vip config ...
	I1105 10:13:22.404115   20650 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1105 10:13:22.417096   20650 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 10:13:22.417139   20650 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 10:13:22.417203   20650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 10:13:22.426058   20650 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 10:13:22.426117   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 10:13:22.434774   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1105 10:13:22.448444   20650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 10:13:22.461910   20650 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I1105 10:13:22.475772   20650 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I1105 10:13:22.478602   20650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 10:13:22.487944   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.594180   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.608389   20650 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:13:22.608597   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:22.629533   20650 out.go:177] * Verifying Kubernetes components...
	I1105 10:13:22.671507   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:22.795219   20650 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 10:13:22.807186   20650 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:13:22.807391   20650 kapi.go:59] client config for ha-213000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xbe1de20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 10:13:22.807429   20650 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I1105 10:13:22.807616   20650 node_ready.go:35] waiting up to 6m0s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:22.807698   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:22.807704   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:22.807711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:22.807714   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.750948   20650 round_trippers.go:574] Response Status: 200 OK in 8943 milliseconds
	I1105 10:13:31.752572   20650 node_ready.go:49] node "ha-213000-m02" has status "Ready":"True"
	I1105 10:13:31.752585   20650 node_ready.go:38] duration metric: took 8.945035646s for node "ha-213000-m02" to be "Ready" ...
	I1105 10:13:31.752614   20650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:31.752661   20650 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 10:13:31.752671   20650 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 10:13:31.752720   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:31.752727   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.752733   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.752738   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.802951   20650 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I1105 10:13:31.809829   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.809889   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-cv2cc
	I1105 10:13:31.809894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.809900   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.809904   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.814415   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:31.815355   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.815363   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.815369   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.815373   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.822380   20650 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 10:13:31.822662   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.822672   20650 pod_ready.go:82] duration metric: took 12.826683ms for pod "coredns-7c65d6cfc9-cv2cc" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822679   20650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.822728   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-q96rw
	I1105 10:13:31.822733   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.822739   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.822744   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.826328   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.826822   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.826831   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.826837   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.826841   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.829860   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.830181   20650 pod_ready.go:93] pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.830191   20650 pod_ready.go:82] duration metric: took 7.507226ms for pod "coredns-7c65d6cfc9-q96rw" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830198   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.830235   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000
	I1105 10:13:31.830240   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.830245   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.830252   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.832219   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:31.832697   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:31.832706   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.832711   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.832715   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.835276   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.835692   20650 pod_ready.go:93] pod "etcd-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.835701   20650 pod_ready.go:82] duration metric: took 5.498306ms for pod "etcd-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835709   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.835747   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m02
	I1105 10:13:31.835752   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.835758   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.835762   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.841537   20650 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 10:13:31.841973   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:31.841981   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.841986   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.841990   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844531   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:31.844869   20650 pod_ready.go:93] pod "etcd-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:31.844879   20650 pod_ready.go:82] duration metric: took 9.164525ms for pod "etcd-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844885   20650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:31.844921   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-213000-m03
	I1105 10:13:31.844926   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.844931   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.844936   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.848600   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:31.952821   20650 request.go:632] Waited for 103.696334ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952860   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:31.952865   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:31.952873   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:31.952877   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:31.955043   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:31.955226   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955236   20650 pod_ready.go:82] duration metric: took 110.346207ms for pod "etcd-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:31.955242   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "etcd-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:31.955257   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.153855   20650 request.go:632] Waited for 198.56381ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153901   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000
	I1105 10:13:32.153906   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.153912   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.153915   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.156326   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.354721   20650 request.go:632] Waited for 197.883079ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354800   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:32.354808   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.354816   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.354821   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.357314   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.357758   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.357771   20650 pod_ready.go:82] duration metric: took 402.50745ms for pod "kube-apiserver-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.357779   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.554904   20650 request.go:632] Waited for 197.060501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555009   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m02
	I1105 10:13:32.555040   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.555059   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.555071   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.562819   20650 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 10:13:32.752788   20650 request.go:632] Waited for 189.599558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752820   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:32.752825   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.752864   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.752870   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.755075   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:32.755378   20650 pod_ready.go:93] pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:32.755387   20650 pod_ready.go:82] duration metric: took 397.605979ms for pod "kube-apiserver-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.755394   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:32.952787   20650 request.go:632] Waited for 197.357502ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952836   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-213000-m03
	I1105 10:13:32.952842   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:32.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:32.952853   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:32.955636   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.153249   20650 request.go:632] Waited for 196.999871ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153317   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:33.153323   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.153329   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.153334   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.155712   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:33.155782   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155797   20650 pod_ready.go:82] duration metric: took 400.400564ms for pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:33.155804   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-apiserver-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:33.155810   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.353944   20650 request.go:632] Waited for 198.075152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354021   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000
	I1105 10:13:33.354033   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.354041   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.354047   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.356715   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.553130   20650 request.go:632] Waited for 196.01942ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553198   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:33.553204   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.553237   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.553242   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.555527   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:33.555890   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.555899   20650 pod_ready.go:82] duration metric: took 400.086552ms for pod "kube-controller-manager-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.555906   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.752845   20650 request.go:632] Waited for 196.894857ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752909   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m02
	I1105 10:13:33.752915   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.752921   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.752925   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.754805   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.953311   20650 request.go:632] Waited for 197.807461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953353   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:33.953381   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:33.953389   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:33.953392   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:33.955376   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:33.955836   20650 pod_ready.go:93] pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:33.955846   20650 pod_ready.go:82] duration metric: took 399.938695ms for pod "kube-controller-manager-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:33.955855   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.153021   20650 request.go:632] Waited for 197.093812ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153060   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-213000-m03
	I1105 10:13:34.153065   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.153072   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.153075   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.155546   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:34.353423   20650 request.go:632] Waited for 197.340662ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353457   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.353463   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.353469   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.353472   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.355383   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.355495   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355514   20650 pod_ready.go:82] duration metric: took 399.657027ms for pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.355524   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-controller-manager-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.355532   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.553620   20650 request.go:632] Waited for 198.034445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553677   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5ldvg
	I1105 10:13:34.553683   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.553689   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.553694   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.555564   20650 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 10:13:34.753369   20650 request.go:632] Waited for 197.394131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753424   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:34.753431   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.753436   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.753440   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.755363   20650 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1105 10:13:34.755426   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755436   20650 pod_ready.go:82] duration metric: took 399.890345ms for pod "kube-proxy-5ldvg" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:34.755442   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-proxy-5ldvg" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:34.755446   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:34.953531   20650 request.go:632] Waited for 198.038372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953615   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m45pk
	I1105 10:13:34.953624   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:34.953631   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:34.953636   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:34.955951   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.153813   20650 request.go:632] Waited for 196.981939ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153879   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m04
	I1105 10:13:35.153894   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.153903   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.153910   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.156466   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.157099   20650 pod_ready.go:93] pod "kube-proxy-m45pk" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.157109   20650 pod_ready.go:82] duration metric: took 401.65588ms for pod "kube-proxy-m45pk" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.157117   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.354248   20650 request.go:632] Waited for 197.082179ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354294   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s52w5
	I1105 10:13:35.354302   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.354340   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.354347   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.357098   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.552778   20650 request.go:632] Waited for 195.237923ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552882   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:35.552910   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.552918   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.552923   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.555242   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.555725   20650 pod_ready.go:93] pod "kube-proxy-s52w5" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.555734   20650 pod_ready.go:82] duration metric: took 398.615884ms for pod "kube-proxy-s52w5" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.555748   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.752802   20650 request.go:632] Waited for 196.982082ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752849   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8xxj
	I1105 10:13:35.752855   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.752861   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.752865   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.755216   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.953665   20650 request.go:632] Waited for 197.923503ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953733   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:35.953742   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:35.953751   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:35.953758   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:35.955875   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:35.956268   20650 pod_ready.go:93] pod "kube-proxy-s8xxj" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:35.956277   20650 pod_ready.go:82] duration metric: took 400.526917ms for pod "kube-proxy-s8xxj" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:35.956283   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.153409   20650 request.go:632] Waited for 197.086533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153486   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000
	I1105 10:13:36.153496   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.153504   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.153513   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.156474   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.354367   20650 request.go:632] Waited for 197.602225ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354401   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000
	I1105 10:13:36.354406   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.354421   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.354441   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.356601   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.356994   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.357004   20650 pod_ready.go:82] duration metric: took 400.718541ms for pod "kube-scheduler-ha-213000" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.357011   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.554145   20650 request.go:632] Waited for 197.038016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554243   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m02
	I1105 10:13:36.554252   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.554264   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.554270   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.556774   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:36.753404   20650 request.go:632] Waited for 196.263898ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753437   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m02
	I1105 10:13:36.753442   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.753448   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.753452   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.756764   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:36.757112   20650 pod_ready.go:93] pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 10:13:36.757122   20650 pod_ready.go:82] duration metric: took 400.109512ms for pod "kube-scheduler-ha-213000-m02" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.757130   20650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	I1105 10:13:36.953514   20650 request.go:632] Waited for 196.347448ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953546   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-213000-m03
	I1105 10:13:36.953558   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:36.953565   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:36.953575   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:36.955940   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.154619   20650 request.go:632] Waited for 198.194145ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154663   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-213000-m03
	I1105 10:13:37.154669   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.154676   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.154695   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.157438   20650 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1105 10:13:37.157524   20650 pod_ready.go:98] node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157535   20650 pod_ready.go:82] duration metric: took 400.40261ms for pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace to be "Ready" ...
	E1105 10:13:37.157542   20650 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-213000-m03" hosting pod "kube-scheduler-ha-213000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-213000-m03": nodes "ha-213000-m03" not found
	I1105 10:13:37.157547   20650 pod_ready.go:39] duration metric: took 5.404967892s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 10:13:37.157569   20650 api_server.go:52] waiting for apiserver process to appear ...
	I1105 10:13:37.157646   20650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:13:37.171805   20650 api_server.go:72] duration metric: took 14.563521484s to wait for apiserver process to appear ...
	I1105 10:13:37.171821   20650 api_server.go:88] waiting for apiserver healthz status ...
	I1105 10:13:37.171836   20650 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I1105 10:13:37.176463   20650 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I1105 10:13:37.176507   20650 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I1105 10:13:37.176512   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.176518   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.176523   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.177377   20650 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1105 10:13:37.177442   20650 api_server.go:141] control plane version: v1.31.2
	I1105 10:13:37.177460   20650 api_server.go:131] duration metric: took 5.62791ms to wait for apiserver health ...
	I1105 10:13:37.177467   20650 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 10:13:37.352914   20650 request.go:632] Waited for 175.404088ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352969   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.352975   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.352982   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.352986   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.357439   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.362936   20650 system_pods.go:59] 26 kube-system pods found
	I1105 10:13:37.362960   20650 system_pods.go:61] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.362964   20650 system_pods.go:61] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.362968   20650 system_pods.go:61] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.362973   20650 system_pods.go:61] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.362980   20650 system_pods.go:61] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.362984   20650 system_pods.go:61] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.362987   20650 system_pods.go:61] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.362993   20650 system_pods.go:61] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.362999   20650 system_pods.go:61] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.363003   20650 system_pods.go:61] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.363007   20650 system_pods.go:61] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.363013   20650 system_pods.go:61] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.363016   20650 system_pods.go:61] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.363021   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.363024   20650 system_pods.go:61] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.363027   20650 system_pods.go:61] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.363030   20650 system_pods.go:61] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.363036   20650 system_pods.go:61] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 10:13:37.363040   20650 system_pods.go:61] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.363042   20650 system_pods.go:61] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.363046   20650 system_pods.go:61] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.363051   20650 system_pods.go:61] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.363055   20650 system_pods.go:61] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.363059   20650 system_pods.go:61] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.363063   20650 system_pods.go:61] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.363065   20650 system_pods.go:61] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.363070   20650 system_pods.go:74] duration metric: took 185.599377ms to wait for pod list to return data ...
	I1105 10:13:37.363076   20650 default_sa.go:34] waiting for default service account to be created ...
	I1105 10:13:37.554093   20650 request.go:632] Waited for 190.967335ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554130   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I1105 10:13:37.554138   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.554152   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.554156   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.557460   20650 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 10:13:37.557594   20650 default_sa.go:45] found service account: "default"
	I1105 10:13:37.557604   20650 default_sa.go:55] duration metric: took 194.526347ms for default service account to be created ...
	I1105 10:13:37.557612   20650 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 10:13:37.752842   20650 request.go:632] Waited for 195.185977ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752875   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I1105 10:13:37.752881   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.752902   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.752907   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.757021   20650 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 10:13:37.762493   20650 system_pods.go:86] 26 kube-system pods found
	I1105 10:13:37.762505   20650 system_pods.go:89] "coredns-7c65d6cfc9-cv2cc" [b6d32d7c-e03f-4a60-a2eb-e81042e65e49] Running
	I1105 10:13:37.762509   20650 system_pods.go:89] "coredns-7c65d6cfc9-q96rw" [cb820265-326d-4e02-b187-0f30754bcd99] Running
	I1105 10:13:37.762512   20650 system_pods.go:89] "etcd-ha-213000" [1d431f2a-8064-4bc9-bc70-913243f83645] Running
	I1105 10:13:37.762517   20650 system_pods.go:89] "etcd-ha-213000-m02" [da6eb444-2c2a-4c8a-82ab-13a543bf0fa0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 10:13:37.762521   20650 system_pods.go:89] "etcd-ha-213000-m03" [c436cc0a-5d4c-473d-90cb-fb3b834c9619] Running
	I1105 10:13:37.762525   20650 system_pods.go:89] "kindnet-hppzk" [3f615ca1-027e-42fe-ad0c-943f7686805f] Running
	I1105 10:13:37.762528   20650 system_pods.go:89] "kindnet-p4bx6" [6a97ae24-e5b5-40a7-b5b0-9f15bcf4240a] Running
	I1105 10:13:37.762532   20650 system_pods.go:89] "kindnet-pf9hr" [320af5ac-d6b6-4fc4-ac52-1b35b9c81ce7] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1105 10:13:37.762535   20650 system_pods.go:89] "kindnet-trfhn" [6f39544f-a014-444c-8ad7-779e1940d254] Running
	I1105 10:13:37.762539   20650 system_pods.go:89] "kube-apiserver-ha-213000" [a32fee4d-29c9-4919-9554-351393c17408] Running
	I1105 10:13:37.762543   20650 system_pods.go:89] "kube-apiserver-ha-213000-m02" [0e69e69b-f4a1-4c5b-a78b-d18411aecae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 10:13:37.762548   20650 system_pods.go:89] "kube-apiserver-ha-213000-m03" [d02cef75-3c45-45bb-b7ec-3f499d518930] Running
	I1105 10:13:37.762551   20650 system_pods.go:89] "kube-controller-manager-ha-213000" [0405dcb5-6322-47fe-b929-22f12fd80b1b] Running
	I1105 10:13:37.762557   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m02" [06d77930-6b69-471d-9139-f454d903c918] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 10:13:37.762561   20650 system_pods.go:89] "kube-controller-manager-ha-213000-m03" [5dfd056c-cf27-470b-9d96-cf1ae48c02cd] Running
	I1105 10:13:37.762566   20650 system_pods.go:89] "kube-proxy-5ldvg" [945c7b43-9b2e-4610-b203-74c4b971e981] Running
	I1105 10:13:37.762569   20650 system_pods.go:89] "kube-proxy-m45pk" [2732aa1d-d316-4fa3-9ae3-9c1f8dd32864] Running
	I1105 10:13:37.762572   20650 system_pods.go:89] "kube-proxy-s52w5" [08e6c33b-72c8-4277-9d0f-c8257490cc64] Running
	I1105 10:13:37.762575   20650 system_pods.go:89] "kube-proxy-s8xxj" [416d3e9e-efe2-42fe-9a62-6bf5ebc884ae] Running
	I1105 10:13:37.762578   20650 system_pods.go:89] "kube-scheduler-ha-213000" [ea19a8b5-3829-4b24-ac87-fd5f74b755d4] Running
	I1105 10:13:37.762583   20650 system_pods.go:89] "kube-scheduler-ha-213000-m02" [f26961d7-33d3-417a-87fd-3c6911dcb46a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 10:13:37.762590   20650 system_pods.go:89] "kube-scheduler-ha-213000-m03" [428462e8-71f8-4cd6-920b-024e83e6251e] Running
	I1105 10:13:37.762594   20650 system_pods.go:89] "kube-vip-ha-213000" [2f7711ae-51c9-48c1-9809-fa70c5a50885] Running
	I1105 10:13:37.762596   20650 system_pods.go:89] "kube-vip-ha-213000-m02" [bb20bc57-fecb-4ff7-937e-59d4a6303c32] Running
	I1105 10:13:37.762600   20650 system_pods.go:89] "kube-vip-ha-213000-m03" [4589347d-3131-41ad-822d-d41f3e03a634] Running
	I1105 10:13:37.762602   20650 system_pods.go:89] "storage-provisioner" [e7f00930-b382-473c-be59-04504c6e23ff] Running
	I1105 10:13:37.762607   20650 system_pods.go:126] duration metric: took 204.991818ms to wait for k8s-apps to be running ...
	I1105 10:13:37.762614   20650 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 10:13:37.762682   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:13:37.777110   20650 system_svc.go:56] duration metric: took 14.491738ms WaitForService to wait for kubelet
	I1105 10:13:37.777127   20650 kubeadm.go:582] duration metric: took 15.16885159s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 10:13:37.777138   20650 node_conditions.go:102] verifying NodePressure condition ...
	I1105 10:13:37.952770   20650 request.go:632] Waited for 175.557407ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952816   20650 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I1105 10:13:37.952827   20650 round_trippers.go:469] Request Headers:
	I1105 10:13:37.952839   20650 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1105 10:13:37.952848   20650 round_trippers.go:473]     Accept: application/json, */*
	I1105 10:13:37.955592   20650 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 10:13:37.956364   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956379   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956390   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956393   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956397   20650 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 10:13:37.956399   20650 node_conditions.go:123] node cpu capacity is 2
	I1105 10:13:37.956403   20650 node_conditions.go:105] duration metric: took 179.263041ms to run NodePressure ...
	I1105 10:13:37.956411   20650 start.go:241] waiting for startup goroutines ...
	I1105 10:13:37.956426   20650 start.go:255] writing updated cluster config ...
	I1105 10:13:37.978800   20650 out.go:201] 
	I1105 10:13:38.000237   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:38.000353   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.022912   20650 out.go:177] * Starting "ha-213000-m04" worker node in "ha-213000" cluster
	I1105 10:13:38.065816   20650 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 10:13:38.065838   20650 cache.go:56] Caching tarball of preloaded images
	I1105 10:13:38.065942   20650 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:13:38.065952   20650 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1105 10:13:38.066024   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.066548   20650 start.go:360] acquireMachinesLock for ha-213000-m04: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:13:38.066601   20650 start.go:364] duration metric: took 39.836µs to acquireMachinesLock for "ha-213000-m04"
	I1105 10:13:38.066614   20650 start.go:96] Skipping create...Using existing machine configuration
	I1105 10:13:38.066619   20650 fix.go:54] fixHost starting: m04
	I1105 10:13:38.066839   20650 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:13:38.066859   20650 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:13:38.078183   20650 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59062
	I1105 10:13:38.078511   20650 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:13:38.078858   20650 main.go:141] libmachine: Using API Version  1
	I1105 10:13:38.078877   20650 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:13:38.079111   20650 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:13:38.079203   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.079308   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:13:38.079392   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.079457   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:13:38.080557   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.080601   20650 fix.go:112] recreateIfNeeded on ha-213000-m04: state=Stopped err=<nil>
	I1105 10:13:38.080610   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	W1105 10:13:38.080695   20650 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 10:13:38.101909   20650 out.go:177] * Restarting existing hyperkit VM for "ha-213000-m04" ...
	I1105 10:13:38.150121   20650 main.go:141] libmachine: (ha-213000-m04) Calling .Start
	I1105 10:13:38.150270   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.150297   20650 main.go:141] libmachine: (ha-213000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid
	I1105 10:13:38.151495   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:13:38.151504   20650 main.go:141] libmachine: (ha-213000-m04) DBG | pid 20571 is in state "Stopped"
	I1105 10:13:38.151536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid...
	I1105 10:13:38.151981   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Using UUID 70721578-92b7-4edc-935c-43ebcacd790c
	I1105 10:13:38.175524   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Generated MAC 1a:a3:f2:a5:2e:39
	I1105 10:13:38.175551   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000
	I1105 10:13:38.175756   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175805   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"70721578-92b7-4edc-935c-43ebcacd790c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000434bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:13:38.175883   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "70721578-92b7-4edc-935c-43ebcacd790c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"}
	I1105 10:13:38.175929   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 70721578-92b7-4edc-935c-43ebcacd790c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/ha-213000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-213000"
	I1105 10:13:38.175943   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:13:38.177358   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 DEBUG: hyperkit: Pid is 20690
	I1105 10:13:38.177760   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Attempt 0
	I1105 10:13:38.177775   20650 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:13:38.177790   20650 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20690
	I1105 10:13:38.179817   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Searching for 1a:a3:f2:a5:2e:39 in /var/db/dhcpd_leases ...
	I1105 10:13:38.179881   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I1105 10:13:38.179891   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:13:38.179930   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:13:38.179944   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:13:38.179961   20650 main.go:141] libmachine: (ha-213000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6ddd}
	I1105 10:13:38.179966   20650 main.go:141] libmachine: (ha-213000-m04) DBG | Found match: 1a:a3:f2:a5:2e:39
	I1105 10:13:38.179974   20650 main.go:141] libmachine: (ha-213000-m04) DBG | IP: 192.169.0.8
	I1105 10:13:38.180001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetConfigRaw
	I1105 10:13:38.180736   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:38.180968   20650 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/ha-213000/config.json ...
	I1105 10:13:38.181459   20650 machine.go:93] provisionDockerMachine start ...
	I1105 10:13:38.181471   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:38.181605   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:38.181707   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:38.181828   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.181929   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:38.182026   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:38.182165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:38.182315   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:38.182325   20650 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 10:13:38.188897   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:13:38.198428   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:13:38.199856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.199886   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.199916   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.199953   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.594841   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:13:38.594856   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:13:38.709716   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:13:38.709736   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:13:38.709743   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:13:38.709759   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:13:38.710592   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:13:38.710604   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:13:44.475519   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:13:44.475536   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:13:44.475546   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:13:44.498793   20650 main.go:141] libmachine: (ha-213000-m04) DBG | 2024/11/05 10:13:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:13:49.237329   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1105 10:13:49.237349   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237488   20650 buildroot.go:166] provisioning hostname "ha-213000-m04"
	I1105 10:13:49.237500   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.237590   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.237684   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.237765   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237842   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.237935   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.238078   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.238220   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.238229   20650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-213000-m04 && echo "ha-213000-m04" | sudo tee /etc/hostname
	I1105 10:13:49.297417   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-213000-m04
	
	I1105 10:13:49.297437   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.297576   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.297673   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297757   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.297853   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.297997   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.298162   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.298173   20650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-213000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-213000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-213000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:13:49.354308   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:13:49.354323   20650 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:13:49.354341   20650 buildroot.go:174] setting up certificates
	I1105 10:13:49.354349   20650 provision.go:84] configureAuth start
	I1105 10:13:49.354357   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetMachineName
	I1105 10:13:49.354507   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:49.354606   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.354711   20650 provision.go:143] copyHostCerts
	I1105 10:13:49.354741   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354793   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:13:49.354799   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:13:49.354909   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:13:49.355124   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355155   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:13:49.355159   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:13:49.355228   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:13:49.355419   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355454   20650 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:13:49.355461   20650 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:13:49.355528   20650 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:13:49.355690   20650 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.ha-213000-m04 san=[127.0.0.1 192.169.0.8 ha-213000-m04 localhost minikube]
	I1105 10:13:49.396705   20650 provision.go:177] copyRemoteCerts
	I1105 10:13:49.396767   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:13:49.396780   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.396910   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.397015   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.397117   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.397221   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:49.427813   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 10:13:49.427885   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:13:49.447457   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 10:13:49.447518   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 10:13:49.467286   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 10:13:49.467359   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 10:13:49.487192   20650 provision.go:87] duration metric: took 132.83626ms to configureAuth
	I1105 10:13:49.487209   20650 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:13:49.487380   20650 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:13:49.487394   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:49.487531   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.487631   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.487715   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487801   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.487890   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.488033   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.488154   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.488162   20650 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:13:49.537465   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:13:49.537478   20650 buildroot.go:70] root file system type: tmpfs
	I1105 10:13:49.537561   20650 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:13:49.537571   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.537704   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.537799   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537884   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.537998   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.538165   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.538298   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.538345   20650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:13:49.598479   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:13:49.598502   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:49.598649   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:49.598747   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598833   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:49.598947   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:49.599089   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:49.599234   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:49.599246   20650 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:13:51.207763   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:13:51.207782   20650 machine.go:96] duration metric: took 13.026432223s to provisionDockerMachine
	I1105 10:13:51.207792   20650 start.go:293] postStartSetup for "ha-213000-m04" (driver="hyperkit")
	I1105 10:13:51.207801   20650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:13:51.207816   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.208031   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:13:51.208047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.208140   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.208231   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.208318   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.208438   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.241123   20650 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:13:51.244240   20650 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:13:51.244251   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:13:51.244336   20650 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:13:51.244477   20650 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:13:51.244484   20650 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> /etc/ssl/certs/178422.pem
	I1105 10:13:51.244646   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:13:51.252753   20650 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:13:51.271782   20650 start.go:296] duration metric: took 63.980744ms for postStartSetup
	I1105 10:13:51.271803   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.271989   20650 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1105 10:13:51.272001   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.272093   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.272178   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.272277   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.272371   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.304392   20650 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1105 10:13:51.304469   20650 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1105 10:13:51.358605   20650 fix.go:56] duration metric: took 13.292102469s for fixHost
	I1105 10:13:51.358630   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.358783   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.358880   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.358963   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.359053   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.359195   20650 main.go:141] libmachine: Using SSH client type: native
	I1105 10:13:51.359329   20650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa27c620] 0xa27f300 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I1105 10:13:51.359336   20650 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:13:51.407868   20650 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730830431.709090009
	
	I1105 10:13:51.407885   20650 fix.go:216] guest clock: 1730830431.709090009
	I1105 10:13:51.407890   20650 fix.go:229] Guest: 2024-11-05 10:13:51.709090009 -0800 PST Remote: 2024-11-05 10:13:51.35862 -0800 PST m=+89.911326584 (delta=350.470009ms)
	I1105 10:13:51.407901   20650 fix.go:200] guest clock delta is within tolerance: 350.470009ms
	I1105 10:13:51.407906   20650 start.go:83] releasing machines lock for "ha-213000-m04", held for 13.34141889s
	I1105 10:13:51.407923   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.408055   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:13:51.430524   20650 out.go:177] * Found network options:
	I1105 10:13:51.451633   20650 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W1105 10:13:51.472140   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.472164   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.472179   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472739   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.472888   20650 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:13:51.473015   20650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W1105 10:13:51.473025   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 10:13:51.473039   20650 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 10:13:51.473047   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473124   20650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 10:13:51.473137   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:13:51.473175   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473286   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473299   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:13:51.473387   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:13:51.473400   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473487   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:13:51.473517   20650 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:13:51.473599   20650 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	W1105 10:13:51.501432   20650 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:13:51.501515   20650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 10:13:51.553972   20650 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:13:51.553993   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.554083   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.569365   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1105 10:13:51.577607   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:13:51.586014   20650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:13:51.586084   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:13:51.594293   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.602646   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:13:51.610969   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:13:51.619400   20650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:13:51.627741   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:13:51.635982   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1105 10:13:51.645401   20650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1105 10:13:51.653565   20650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:13:51.660899   20650 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:13:51.660963   20650 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:13:51.669419   20650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:13:51.677143   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:51.772664   20650 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:13:51.792178   20650 start.go:495] detecting cgroup driver to use...
	I1105 10:13:51.792270   20650 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:13:51.808083   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.820868   20650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:13:51.842221   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:13:51.854583   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.865539   20650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:13:51.892869   20650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:13:51.904042   20650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:13:51.922494   20650 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:13:51.928520   20650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:13:51.945780   20650 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1105 10:13:51.962437   20650 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:13:52.060460   20650 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:13:52.163232   20650 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:13:52.163260   20650 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:13:52.178328   20650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:13:52.296397   20650 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:14:53.349067   20650 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.016016812s)
	I1105 10:14:53.349159   20650 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:14:53.385876   20650 out.go:201] 
	W1105 10:14:53.422606   20650 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:13:50 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.219562799Z" level=info msg="Starting up"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220058811Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:13:50 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:50.220520378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.236571587Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251944562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.251994240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252044391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252055761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252195060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252229740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252349558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252384866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252397240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252405229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252524569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.252724198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254281313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254319501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254483547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254518416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254637452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.254682187Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256614572Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256700357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256735425Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256747481Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256756858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.256872356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257148179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257222801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257256207Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257270046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257279834Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257288340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257296529Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257305718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257315275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257323861Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257331966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257341123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257353483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257369189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257380484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257389307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257399701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257408788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257416371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257425618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257434996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257444348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257451686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257459575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257467078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257476277Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257490077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257498560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257506719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257553863Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257589606Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257600230Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257608504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257615175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257802193Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.257837950Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258034640Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258090699Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258116806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:13:50 ha-213000-m04 dockerd[497]: time="2024-11-05T18:13:50.258155872Z" level=info msg="containerd successfully booted in 0.022413s"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.237413687Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.251112258Z" level=info msg="Loading containers: start."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.367445130Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.434506480Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479458634Z" level=warning msg="error locating sandbox id 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea: sandbox 55273876f8900a143c9b7392b9ea2b20e10c07e26f18646ec50efaaacc9ac6ea not found"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.479805760Z" level=info msg="Loading containers: done."
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487402038Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487478220Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487513470Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.487665655Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507740899Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:13:51 ha-213000-m04 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:13:51 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:51.507861455Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.610071512Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611439931Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611626935Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611699035Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:13:52 ha-213000-m04 dockerd[491]: time="2024-11-05T18:13:52.611737953Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:13:52 ha-213000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:13:53 ha-213000-m04 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:13:53 ha-213000-m04 dockerd[1131]: time="2024-11-05T18:13:53.642820469Z" level=info msg="Starting up"
	Nov 05 18:14:53 ha-213000-m04 dockerd[1131]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:14:53 ha-213000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:14:53.422674   20650 out.go:270] * 
	W1105 10:14:53.423462   20650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:14:53.533703   20650 out.go:201] 
	
	
	==> Docker <==
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.321144470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358583815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358913638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.358923588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.359308274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371019459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371180579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371195366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.371264075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384883251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384945765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.384958316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.385102977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.393595106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396454919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396464389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:24 ha-213000 dockerd[1158]: time="2024-11-05T18:14:24.396559087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:14:54 ha-213000 dockerd[1151]: time="2024-11-05T18:14:54.321538330Z" level=info msg="ignoring event" container=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322187590Z" level=info msg="shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322448589Z" level=warning msg="cleaning up after shim disconnected" id=ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9 namespace=moby
	Nov 05 18:14:54 ha-213000 dockerd[1158]: time="2024-11-05T18:14:54.322490228Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289904323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289952412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.289962172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 18:15:08 ha-213000 dockerd[1158]: time="2024-11-05T18:15:08.290120529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b4e2f8c824d26       6e38f40d628db       About a minute ago   Running             storage-provisioner       5                   7a18da25cf537       storage-provisioner
	568ed995df15d       8c811b4aec35f       About a minute ago   Running             busybox                   2                   f5d092375dddf       busybox-7dff88458-q5j74
	a54d96a8e9e4d       9ca7e41918271       About a minute ago   Running             kindnet-cni               2                   07702f76ce639       kindnet-hppzk
	820b778421b38       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   bc67a22cb5eff       coredns-7c65d6cfc9-cv2cc
	ca9011bea4440       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   703f8fe612ac5       coredns-7c65d6cfc9-q96rw
	85e7cccdf4831       505d571f5fd56       About a minute ago   Running             kube-proxy                2                   7a4f7e3a95ced       kube-proxy-s8xxj
	ea27059bb8dad       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       4                   7a18da25cf537       storage-provisioner
	43950f04c89aa       0486b6c53a1b5       2 minutes ago        Running             kube-controller-manager   4                   3c4a95766d8df       kube-controller-manager-ha-213000
	8e0c0916fca71       9499c9960544e       3 minutes ago        Running             kube-apiserver            4                   f2454c695936e       kube-apiserver-ha-213000
	897300e44633b       baf03d14a86fd       3 minutes ago        Running             kube-vip                  1                   f00a17fab8835       kube-vip-ha-213000
	ad7975173845f       847c7bc1a5418       3 minutes ago        Running             kube-scheduler            2                   5162e28d0e03d       kube-scheduler-ha-213000
	8a28e20a2bf3d       2e96e5913fc06       3 minutes ago        Running             etcd                      2                   acdca4d26c9f6       etcd-ha-213000
	ea0b432d94423       0486b6c53a1b5       3 minutes ago        Exited              kube-controller-manager   3                   3c4a95766d8df       kube-controller-manager-ha-213000
	16b5e8baed219       9499c9960544e       3 minutes ago        Exited              kube-apiserver            3                   f2454c695936e       kube-apiserver-ha-213000
	96799b06e508f       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   07d926acb1a6e       busybox-7dff88458-q5j74
	86ef547964bcb       c69fa2e9cbf5f       5 minutes ago        Exited              coredns                   1                   5fe3e01a4f33a       coredns-7c65d6cfc9-q96rw
	dd08019aca606       c69fa2e9cbf5f       5 minutes ago        Exited              coredns                   1                   00f7c155eb4b0       coredns-7c65d6cfc9-cv2cc
	4aec0d02658e0       505d571f5fd56       5 minutes ago        Exited              kube-proxy                1                   1ece5e2bcaf09       kube-proxy-s8xxj
	f9a05b099e4ee       9ca7e41918271       5 minutes ago        Exited              kindnet-cni               1                   fd311d6ed9c5c       kindnet-hppzk
	51c2df7fc859d       baf03d14a86fd       7 minutes ago        Exited              kube-vip                  0                   98323683c9082       kube-vip-ha-213000
	bdbc1a6e54924       2e96e5913fc06       7 minutes ago        Exited              etcd                      1                   474c9f706901d       etcd-ha-213000
	f1607d6ea7a30       847c7bc1a5418       7 minutes ago        Exited              kube-scheduler            1                   b217215a9cf0c       kube-scheduler-ha-213000
	
	
	==> coredns [820b778421b3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59240 - 59060 "HINFO IN 4329632244317726903.7890662898760833477. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011788676s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[675101378]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.641) (total time: 30001ms):
	Trace[675101378]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.641)
	Trace[675101378]: [30.00107355s] [30.00107355s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[792881874]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30001ms):
	Trace[792881874]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:14:54.642)
	Trace[792881874]: [30.001711346s] [30.001711346s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[34248386]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[34248386]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[34248386]: [30.000366606s] [30.000366606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [86ef547964bc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33774 - 54633 "HINFO IN 1409488340311598538.4125883895955909161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004156009s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1322590960]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1322590960]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.870)
	Trace[1322590960]: [30.003129161s] [30.003129161s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1548400132]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30002ms):
	Trace[1548400132]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:11:00.870)
	Trace[1548400132]: [30.002952972s] [30.002952972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1633349832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.870) (total time: 30002ms):
	Trace[1633349832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[1633349832]: [30.002091533s] [30.002091533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca9011bea444] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47030 - 28453 "HINFO IN 9030478600017221968.7137590874178245370. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011696462s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[954770416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.640) (total time: 30002ms):
	Trace[954770416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:14:54.642)
	Trace[954770416]: [30.002259073s] [30.002259073s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1172241105]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1172241105]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.644)
	Trace[1172241105]: [30.000198867s] [30.000198867s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1149531028]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:14:24.644) (total time: 30000ms):
	Trace[1149531028]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:14:54.645)
	Trace[1149531028]: [30.000272321s] [30.000272321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [dd08019aca60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56311 - 34269 "HINFO IN 2200850437967647570.948968209837946997. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0110095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819586440]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.868) (total time: 30001ms):
	Trace[819586440]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:11:00.870)
	Trace[819586440]: [30.001860838s] [30.001860838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[58172056]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.869) (total time: 30000ms):
	Trace[58172056]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:11:00.870)
	Trace[58172056]: [30.000759284s] [30.000759284s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1700347832]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:10:30.867) (total time: 30003ms):
	Trace[1700347832]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (18:11:00.871)
	Trace[1700347832]: [30.003960758s] [30.003960758s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-213000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T10_01_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:36 +0000   Tue, 05 Nov 2024 18:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-213000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1892e4225dd5499cb35e29ff753a0c40
	  System UUID:                17364deb-0000-0000-8a00-7267ff6ac6e0
	  Boot ID:                    872d5ac1-d893-413e-b883-f1ad425b7c82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-q5j74              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-cv2cc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-q96rw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-213000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-hppzk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-213000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-213000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-s8xxj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-213000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-213000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           14m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-213000 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  Starting                 3m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s (x8 over 3m44s)  kubelet          Node ha-213000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x8 over 3m44s)  kubelet          Node ha-213000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x7 over 3m44s)  kubelet          Node ha-213000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	  Normal  RegisteredNode           26s                    node-controller  Node ha-213000 event: Registered Node ha-213000 in Controller
	
	
	Name:               ha-213000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_02_51_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:02:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:13:34 +0000   Tue, 05 Nov 2024 18:10:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-213000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dc248d7debd421bb4108dc092da24e0
	  System UUID:                8475486e-0000-0000-b8b0-772de8e0415c
	  Boot ID:                    8a40793c-3b3c-49c9-a112-66a753c3fa07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-89r49                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-213000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-pf9hr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-213000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-213000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s52w5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-213000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-213000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-213000-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  Starting                 3m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m1s (x8 over 3m1s)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x8 over 3m1s)    kubelet          Node ha-213000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x7 over 3m1s)    kubelet          Node ha-213000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	  Normal  RegisteredNode           26s                    node-controller  Node ha-213000-m02 event: Registered Node ha-213000-m02 in Controller
	
	
	Name:               ha-213000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_04_59_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:04:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:11:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:11:42 +0000   Tue, 05 Nov 2024 18:14:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-213000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 efb6d3b228624c8f9582b78a04751815
	  System UUID:                70724edc-0000-0000-935c-43ebcacd790c
	  Boot ID:                    6405d175-8027-4e75-bb1e-1845fbf67784
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-28tbv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kindnet-p4bx6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-m45pk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 4m40s                  kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)      kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m17s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           6m16s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             5m37s                  node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           5m34s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m42s (x2 over 4m42s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m42s (x2 over 4m42s)  kubelet          Node ha-213000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m42s (x2 over 4m42s)  kubelet          Node ha-213000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m42s                  kubelet          Node ha-213000-m04 has been rebooted, boot id: 6405d175-8027-4e75-bb1e-1845fbf67784
	  Normal   NodeReady                4m42s                  kubelet          Node ha-213000-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m49s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   RegisteredNode           2m49s                  node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	  Normal   NodeNotReady             2m9s                   node-controller  Node ha-213000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           26s                    node-controller  Node ha-213000-m04 event: Registered Node ha-213000-m04 in Controller
	
	
	Name:               ha-213000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-213000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-213000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T10_15_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:15:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-213000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:16:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:16:21 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:16:21 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:16:21 +0000   Tue, 05 Nov 2024 18:15:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:16:21 +0000   Tue, 05 Nov 2024 18:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-213000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba49d86a1883402ebcff4760f7173855
	  System UUID:                39144d91-0000-0000-8f4c-e91cd4ad9fd9
	  Boot ID:                    dad28c98-204b-4595-92ed-10d65834fde9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-213000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         32s
	  kube-system                 kindnet-gncwv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      34s
	  kube-system                 kube-apiserver-ha-213000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-ha-213000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-njqc5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-ha-213000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-vip-ha-213000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node ha-213000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node ha-213000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)  kubelet          Node ha-213000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	  Normal  RegisteredNode           26s                node-controller  Node ha-213000-m05 event: Registered Node ha-213000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036175] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007972] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.844917] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000007] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006614] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.702887] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.233657] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.342806] systemd-fstab-generator[457]: Ignoring "noauto" option for root device
	[  +0.102790] systemd-fstab-generator[469]: Ignoring "noauto" option for root device
	[  +2.007272] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.269734] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.085327] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +0.060857] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.057582] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +2.475879] systemd-fstab-generator[1364]: Ignoring "noauto" option for root device
	[  +0.104726] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.119211] systemd-fstab-generator[1388]: Ignoring "noauto" option for root device
	[  +0.130514] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.455084] systemd-fstab-generator[1568]: Ignoring "noauto" option for root device
	[  +6.862189] kauditd_printk_skb: 190 callbacks suppressed
	[Nov 5 18:13] kauditd_printk_skb: 40 callbacks suppressed
	[Nov 5 18:14] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8a28e20a2bf3] <==
	{"level":"info","ts":"2024-11-05T18:14:55.084484Z","caller":"traceutil/trace.go:171","msg":"trace[689855107] transaction","detail":"{read_only:false; response_revision:2931; number_of_response:1; }","duration":"110.3233ms","start":"2024-11-05T18:14:54.974150Z","end":"2024-11-05T18:14:55.084473Z","steps":["trace[689855107] 'process raft request'  (duration: 110.263526ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T18:15:51.034889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6366593563784330242 13314548521573537860) learners=(8641313866221225839)"}
	{"level":"info","ts":"2024-11-05T18:15:51.035473Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"77ec1d2d7cc6076f","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-11-05T18:15:51.035571Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.035712Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.036421Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.036659Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037105Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037265Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-11-05T18:15:51.037295Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:51.037162Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"warn","ts":"2024-11-05T18:15:51.070004Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"77ec1d2d7cc6076f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-11-05T18:15:51.205387Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.9:2380/version","remote-member-id":"77ec1d2d7cc6076f","error":"Get \"https://192.169.0.9:2380/version\": dial tcp 192.169.0.9:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:15:51.205445Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"77ec1d2d7cc6076f","error":"Get \"https://192.169.0.9:2380/version\": dial tcp 192.169.0.9:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-11-05T18:15:51.564464Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"77ec1d2d7cc6076f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-11-05T18:15:52.011350Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"77ec1d2d7cc6076f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-11-05T18:15:52.011393Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.011407Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.016006Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"77ec1d2d7cc6076f","stream-type":"stream Message"}
	{"level":"info","ts":"2024-11-05T18:15:52.016118Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.027894Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.031268Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"77ec1d2d7cc6076f"}
	{"level":"info","ts":"2024-11-05T18:15:52.565744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6366593563784330242 8641313866221225839 13314548521573537860)"}
	{"level":"info","ts":"2024-11-05T18:15:52.565834Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-11-05T18:15:52.565950Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"77ec1d2d7cc6076f"}
	
	
	==> etcd [bdbc1a6e5492] <==
	{"level":"warn","ts":"2024-11-05T18:12:13.699058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:09.275669Z","time spent":"4.423385981s","remote":"127.0.0.1:52268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:13.283499Z","time spent":"415.604721ms","remote":"127.0.0.1:52350","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.487277082s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699158Z","caller":"traceutil/trace.go:171","msg":"trace[1772748615] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"7.487289106s","start":"2024-11-05T18:12:06.211867Z","end":"2024-11-05T18:12:13.699156Z","steps":["trace[1772748615] 'agreement among raft nodes before linearized reading'  (duration: 7.487277083s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699169Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:06.211838Z","time spent":"7.487327421s","remote":"127.0.0.1:52456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.699211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.037776693s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:12:13.699221Z","caller":"traceutil/trace.go:171","msg":"trace[763418090] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"2.037787826s","start":"2024-11-05T18:12:11.661430Z","end":"2024-11-05T18:12:13.699218Z","steps":["trace[763418090] 'agreement among raft nodes before linearized reading'  (duration: 2.037776524s)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T18:12:13.699230Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-05T18:12:11.661414Z","time spent":"2.03781384s","remote":"127.0.0.1:52228","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":0,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	2024/11/05 18:12:13 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-11-05T18:12:13.734339Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-11-05T18:12:13.734385Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-11-05T18:12:13.734444Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-11-05T18:12:13.734706Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734723Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734737Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734820Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734866Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.734875Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"585aaf1d56a73c02"}
	{"level":"info","ts":"2024-11-05T18:12:13.735810Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735871Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-11-05T18:12:13.735879Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-213000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 18:16:24 up 4 min,  0 users,  load average: 0.31, 0.16, 0.07
	Linux ha-213000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a54d96a8e9e4] <==
	I1105 18:15:55.792775       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:15:55.792888       1 main.go:301] handling current node
	I1105 18:15:55.792908       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:15:55.792917       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:15:55.793456       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:15:55.793579       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:15:55.793978       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:15:55.794105       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	I1105 18:15:55.794696       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0 Realm: 0} 
	I1105 18:16:05.793158       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:16:05.793224       1 main.go:301] handling current node
	I1105 18:16:05.793241       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:16:05.793581       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:16:05.794428       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:16:05.794608       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:16:05.795140       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:16:05.795336       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	I1105 18:16:15.792658       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:16:15.792706       1 main.go:301] handling current node
	I1105 18:16:15.792719       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:16:15.792725       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:16:15.793418       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:16:15.793485       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:16:15.797231       1 main.go:297] Handling node with IPs: map[192.169.0.9:{}]
	I1105 18:16:15.797258       1 main.go:324] Node ha-213000-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f9a05b099e4e] <==
	I1105 18:11:41.574590       1 main.go:301] handling current node
	I1105 18:11:41.574599       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:41.574604       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:41.574749       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:41.574789       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567175       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:11:51.567282       1 main.go:301] handling current node
	I1105 18:11:51.567311       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:11:51.567325       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:11:51.567514       1 main.go:297] Handling node with IPs: map[192.169.0.7:{}]
	I1105 18:11:51.567574       1 main.go:324] Node ha-213000-m03 has CIDR [10.244.2.0/24] 
	I1105 18:11:51.567879       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:11:51.567959       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:01.566316       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:01.566340       1 main.go:301] handling current node
	I1105 18:12:01.566353       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:01.566358       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:01.566565       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:01.566573       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	I1105 18:12:11.571151       1 main.go:297] Handling node with IPs: map[192.169.0.5:{}]
	I1105 18:12:11.571336       1 main.go:301] handling current node
	I1105 18:12:11.571478       1 main.go:297] Handling node with IPs: map[192.169.0.6:{}]
	I1105 18:12:11.571602       1 main.go:324] Node ha-213000-m02 has CIDR [10.244.1.0/24] 
	I1105 18:12:11.572596       1 main.go:297] Handling node with IPs: map[192.169.0.8:{}]
	I1105 18:12:11.572626       1 main.go:324] Node ha-213000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [16b5e8baed21] <==
	I1105 18:12:47.610850       1 options.go:228] external host was not specified, using 192.169.0.5
	I1105 18:12:47.613755       1 server.go:142] Version: v1.31.2
	I1105 18:12:47.614011       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.895871       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1105 18:12:48.898884       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:12:48.901520       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1105 18:12:48.901573       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1105 18:12:48.902234       1 instance.go:232] Using reconciler: lease
	W1105 18:13:08.892813       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1105 18:13:08.896286       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1105 18:13:08.903685       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W1105 18:13:08.903693       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [8e0c0916fca7] <==
	I1105 18:13:32.048504       1 establishing_controller.go:81] Starting EstablishingController
	I1105 18:13:32.048599       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1105 18:13:32.048646       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1105 18:13:32.048673       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1105 18:13:32.111932       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:13:32.112352       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:13:32.112415       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:13:32.112712       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:13:32.112790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:13:32.115714       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:13:32.115760       1 policy_source.go:224] refreshing policies
	I1105 18:13:32.115832       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:13:32.118673       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:13:32.126538       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:13:32.129328       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:13:32.136801       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:13:32.137650       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:13:32.137679       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:13:32.137683       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:13:32.137688       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:13:32.144136       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1105 18:13:32.162460       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1105 18:13:33.018201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:13:33.274965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1105 18:14:23.399590       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [43950f04c89a] <==
	I1105 18:15:03.683973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.253624ms"
	I1105 18:15:03.684142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.592µs"
	E1105 18:15:50.681201       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9ljhr failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9ljhr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1105 18:15:50.684700       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9ljhr failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9ljhr\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1105 18:15:50.808750       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-213000-m05\" does not exist"
	I1105 18:15:50.821950       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-213000-m05" podCIDRs=["10.244.2.0/24"]
	I1105 18:15:50.821995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:50.822017       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:50.837924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:51.008189       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:52.758496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:53.381023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:53.483104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:55.535311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:55.535903       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-213000-m05"
	I1105 18:15:58.376422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:15:58.431221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:15:58.475735       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m04"
	I1105 18:16:00.948091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:05.645035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:08.579705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.430941       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.443871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:10.527913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	I1105 18:16:21.479988       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-213000-m05"
	
	
	==> kube-controller-manager [ea0b432d9442] <==
	I1105 18:12:48.246520       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:12:48.777745       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:12:48.777814       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:12:48.783136       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:12:48.783574       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:12:48.783729       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:12:48.783931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1105 18:13:09.910735       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-proxy [4aec0d02658e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:10:30.967416       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:10:30.985864       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:10:30.985986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:10:31.019992       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:10:31.020085       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:10:31.020128       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:10:31.022301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:10:31.022843       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:10:31.022888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:10:31.026969       1 config.go:199] "Starting service config controller"
	I1105 18:10:31.027078       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:10:31.027666       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:10:31.027692       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:10:31.028138       1 config.go:328] "Starting node config controller"
	I1105 18:10:31.028170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:10:31.130453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:10:31.130459       1 shared_informer.go:320] Caches are synced for node config
	I1105 18:10:31.130467       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [85e7cccdf483] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 18:14:24.812805       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 18:14:24.832536       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E1105 18:14:24.832803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:14:24.864245       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 18:14:24.864284       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 18:14:24.864314       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:14:24.866476       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:14:24.868976       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:14:24.869009       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:14:24.872199       1 config.go:199] "Starting service config controller"
	I1105 18:14:24.872427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:14:24.872629       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:14:24.872656       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:14:24.874721       1 config.go:328] "Starting node config controller"
	I1105 18:14:24.874748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:14:24.974138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:14:24.974427       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:14:24.975147       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad7975173845] <==
	W1105 18:13:17.072213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.072242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.177384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.177607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.472456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.472508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.646303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.646354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:17.851021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:17.851072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:18.674193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:18.674222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.133550       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.133602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.167612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.167767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.410336       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.410541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.515934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.516006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.540843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.540926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W1105 18:13:19.825617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E1105 18:13:19.825717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	I1105 18:13:32.157389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f1607d6ea7a3] <==
	W1105 18:10:03.671887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 18:10:03.671970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 18:10:03.672285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 18:10:03.672503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.672829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.672954       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 18:10:03.673005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 18:10:03.673161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1105 18:10:03.673298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.673406       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1105 18:10:03.673427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 18:10:03.703301       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 18:10:03.703348       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1105 18:10:27.397168       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:11:49.191240       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	E1105 18:11:49.193101       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 4d4e0a71-45f0-4857-9394-23fc0a602fbe(default/busybox-7dff88458-28tbv) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-28tbv"
	I1105 18:11:49.193402       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-28tbv" node="ha-213000-m04"
	I1105 18:12:13.753881       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1105 18:12:13.756404       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1105 18:12:13.756765       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 05 18:14:23 ha-213000 kubelet[1575]: E1105 18:14:23.047096    1575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-213000\" not found"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.299353    1575 apiserver.go:52] "Watching apiserver"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.401536    1575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.426959    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-xtables-lock\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427025    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-cni-cfg\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427041    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f615ca1-027e-42fe-ad0c-943f7686805f-lib-modules\") pod \"kindnet-hppzk\" (UID: \"3f615ca1-027e-42fe-ad0c-943f7686805f\") " pod="kube-system/kindnet-hppzk"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427052    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e7f00930-b382-473c-be59-04504c6e23ff-tmp\") pod \"storage-provisioner\" (UID: \"e7f00930-b382-473c-be59-04504c6e23ff\") " pod="kube-system/storage-provisioner"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427090    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-xtables-lock\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.427103    1575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416d3e9e-efe2-42fe-9a62-6bf5ebc884ae-lib-modules\") pod \"kube-proxy-s8xxj\" (UID: \"416d3e9e-efe2-42fe-9a62-6bf5ebc884ae\") " pod="kube-system/kube-proxy-s8xxj"
	Nov 05 18:14:23 ha-213000 kubelet[1575]: I1105 18:14:23.446313    1575 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 05 18:14:24 ha-213000 kubelet[1575]: I1105 18:14:24.613521    1575 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d092375dddf0b7f9bff69a9a92be66e07e3d879f6ff178fa881b4b5fde381b"
	Nov 05 18:14:40 ha-213000 kubelet[1575]: E1105 18:14:40.279613    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:14:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:14:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971252    1575 scope.go:117] "RemoveContainer" containerID="6668904ee766d56b8d55ddf5af906befaf694e0933fdf7c8fdb3b42a676d0fb3"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: I1105 18:14:54.971818    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:14:54 ha-213000 kubelet[1575]: E1105 18:14:54.971979    1575 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e7f00930-b382-473c-be59-04504c6e23ff)\"" pod="kube-system/storage-provisioner" podUID="e7f00930-b382-473c-be59-04504c6e23ff"
	Nov 05 18:15:08 ha-213000 kubelet[1575]: I1105 18:15:08.233582    1575 scope.go:117] "RemoveContainer" containerID="ea27059bb8dadb6e9cba0fafbbf6eee76cd2b55595a760336a239433c960dde9"
	Nov 05 18:15:40 ha-213000 kubelet[1575]: E1105 18:15:40.278228    1575 iptables.go:577] "Could not set up iptables canary" err=<
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 05 18:15:40 ha-213000 kubelet[1575]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 05 18:15:40 ha-213000 kubelet[1575]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-213000 -n ha-213000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-213000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.64s)
TestMountStart/serial/StartWithMountFirst (136.96s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-040000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E1105 10:20:57.099375   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:21:31.157655   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-040000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.866502512s)
-- stdout --
	* [mount-start-1-040000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-040000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-040000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:6c:da:20:dd:dd
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-040000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:59:72:9e:be:02
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b2:59:72:9e:be:02
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-040000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-040000 -n mount-start-1-040000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-040000 -n mount-start-1-040000: exit status 7 (97.572642ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1105 10:23:02.300047   21241 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:23:02.300072   21241 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-040000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.96s)
TestScheduledStopUnix (142.09s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-963000 --memory=2048 --driver=hyperkit 
E1105 10:37:37.197025   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-963000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.709660293s)
-- stdout --
	* [scheduled-stop-963000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-963000" primary control-plane node in "scheduled-stop-963000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-963000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3e:8f:c0:0a:92
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-963000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3a:ca:42:81:3f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3a:ca:42:81:3f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-963000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-963000" primary control-plane node in "scheduled-stop-963000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-963000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3e:8f:c0:0a:92
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-963000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3a:ca:42:81:3f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 42:3a:ca:42:81:3f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-11-05 10:39:15.448299 -0800 PST m=+3532.006125443
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-963000 -n scheduled-stop-963000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-963000 -n scheduled-stop-963000: exit status 7 (97.98517ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1105 10:39:15.544379   22565 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 10:39:15.544401   22565 status.go:119] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-963000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-963000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-963000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-963000: (5.278121988s)
--- FAIL: TestScheduledStopUnix (142.09s)
TestKubernetesUpgrade (1341.75s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (1m16.671919807s)
-- stdout --
	* [kubernetes-upgrade-498000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "kubernetes-upgrade-498000" primary control-plane node in "kubernetes-upgrade-498000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	I1105 10:55:28.182907   23258 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:55:28.183214   23258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:55:28.183220   23258 out.go:358] Setting ErrFile to fd 2...
	I1105 10:55:28.183223   23258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:55:28.183408   23258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:55:28.185152   23258 out.go:352] Setting JSON to false
	I1105 10:55:28.215191   23258 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10497,"bootTime":1730822431,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:55:28.215296   23258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:55:28.271144   23258 out.go:177] * [kubernetes-upgrade-498000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:55:28.312237   23258 notify.go:220] Checking for updates...
	I1105 10:55:28.337219   23258 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:55:28.438260   23258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:55:28.475128   23258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:55:28.494959   23258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:55:28.527139   23258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:55:28.548056   23258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:55:28.569555   23258 config.go:182] Loaded profile config "cert-expiration-488000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:55:28.569660   23258 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:55:28.602188   23258 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 10:55:28.644110   23258 start.go:297] selected driver: hyperkit
	I1105 10:55:28.644122   23258 start.go:901] validating driver "hyperkit" against <nil>
	I1105 10:55:28.644132   23258 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:55:28.650457   23258 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:55:28.650599   23258 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 10:55:28.661800   23258 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 10:55:28.668695   23258 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:55:28.668721   23258 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 10:55:28.668754   23258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 10:55:28.669008   23258 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 10:55:28.669040   23258 cni.go:84] Creating CNI manager for ""
	I1105 10:55:28.669081   23258 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1105 10:55:28.669132   23258 start.go:340] cluster config:
	{Name:kubernetes-upgrade-498000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:55:28.669226   23258 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 10:55:28.710925   23258 out.go:177] * Starting "kubernetes-upgrade-498000" primary control-plane node in "kubernetes-upgrade-498000" cluster
	I1105 10:55:28.732138   23258 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1105 10:55:28.732164   23258 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1105 10:55:28.732179   23258 cache.go:56] Caching tarball of preloaded images
	I1105 10:55:28.732281   23258 preload.go:172] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 10:55:28.732290   23258 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1105 10:55:28.732353   23258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/config.json ...
	I1105 10:55:28.732372   23258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/config.json: {Name:mk5116503ba73c08525d25c5bbdcaf10d64a6ce4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 10:55:28.733213   23258 start.go:360] acquireMachinesLock for kubernetes-upgrade-498000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 10:55:28.733295   23258 start.go:364] duration metric: took 68.969µs to acquireMachinesLock for "kubernetes-upgrade-498000"
	I1105 10:55:28.733317   23258 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 10:55:28.733351   23258 start.go:125] createHost starting for "" (driver="hyperkit")
	I1105 10:55:28.755046   23258 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 10:55:28.755191   23258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:55:28.755222   23258 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:55:28.766136   23258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61004
	I1105 10:55:28.766473   23258 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:55:28.766872   23258 main.go:141] libmachine: Using API Version  1
	I1105 10:55:28.766881   23258 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:55:28.767116   23258 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:55:28.767224   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetMachineName
	I1105 10:55:28.767318   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:28.767430   23258 start.go:159] libmachine.API.Create for "kubernetes-upgrade-498000" (driver="hyperkit")
	I1105 10:55:28.767453   23258 client.go:168] LocalClient.Create starting
	I1105 10:55:28.767487   23258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 10:55:28.767546   23258 main.go:141] libmachine: Decoding PEM data...
	I1105 10:55:28.767561   23258 main.go:141] libmachine: Parsing certificate...
	I1105 10:55:28.767617   23258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 10:55:28.767663   23258 main.go:141] libmachine: Decoding PEM data...
	I1105 10:55:28.767678   23258 main.go:141] libmachine: Parsing certificate...
	I1105 10:55:28.767692   23258 main.go:141] libmachine: Running pre-create checks...
	I1105 10:55:28.767699   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .PreCreateCheck
	I1105 10:55:28.767768   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:28.767929   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetConfigRaw
	I1105 10:55:28.776422   23258 main.go:141] libmachine: Creating machine...
	I1105 10:55:28.776431   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Create
	I1105 10:55:28.776524   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:28.777352   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | I1105 10:55:28.776519   23266 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:55:28.777448   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso...
	I1105 10:55:28.974491   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | I1105 10:55:28.974374   23266 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa...
	I1105 10:55:29.221824   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | I1105 10:55:29.221757   23266 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/kubernetes-upgrade-498000.rawdisk...
	I1105 10:55:29.221841   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Writing magic tar header
	I1105 10:55:29.221851   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Writing SSH key tar header
	I1105 10:55:29.222452   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | I1105 10:55:29.222396   23266 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000 ...
	I1105 10:55:29.605301   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:29.605319   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/hyperkit.pid
	I1105 10:55:29.605419   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Using UUID b961ae59-0a19-4238-822d-4bd7795e3c6b
	I1105 10:55:29.630911   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Generated MAC 96:62:b6:b6:75:db
	I1105 10:55:29.630933   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-498000
	I1105 10:55:29.630962   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b961ae59-0a19-4238-822d-4bd7795e3c6b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Argume
nts:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:55:29.630996   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b961ae59-0a19-4238-822d-4bd7795e3c6b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e41e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Argume
nts:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 10:55:29.631032   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b961ae59-0a19-4238-822d-4bd7795e3c6b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/kubernetes-upgrade-498000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machi
nes/kubernetes-upgrade-498000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-498000"}
	I1105 10:55:29.631064   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b961ae59-0a19-4238-822d-4bd7795e3c6b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/kubernetes-upgrade-498000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/bzimage,/Users/jenkins/minikube-
integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kubernetes-upgrade-498000"
	I1105 10:55:29.631071   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 10:55:29.634135   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 DEBUG: hyperkit: Pid is 23267
	I1105 10:55:29.634718   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 0
	I1105 10:55:29.634734   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:29.634834   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:29.635914   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:29.636066   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1105 10:55:29.636084   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 10:55:29.636101   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:55:29.636114   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:55:29.636125   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:55:29.636137   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:55:29.636155   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:55:29.636167   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:55:29.636177   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:55:29.636188   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:55:29.636198   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:55:29.636206   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:55:29.636249   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:55:29.636270   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:55:29.636278   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:55:29.636285   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:55:29.636293   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:55:29.636302   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:55:29.636309   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:55:29.636317   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:55:29.636325   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:55:29.652605   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 10:55:29.675021   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 10:55:29.676106   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:55:29.676120   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:55:29.676140   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:55:29.676151   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:55:30.072369   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 10:55:30.072387   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 10:55:30.187095   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 10:55:30.187113   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 10:55:30.187123   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 10:55:30.187135   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 10:55:30.187995   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 10:55:30.188005   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 10:55:31.636693   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 1
	I1105 10:55:31.636706   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:31.636811   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:31.637876   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:31.637968   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1105 10:55:31.637977   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 10:55:31.638025   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:55:31.638042   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:55:31.638050   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:55:31.638056   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:55:31.638070   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:55:31.638088   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:55:31.638096   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:55:31.638103   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:55:31.638110   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:55:31.638128   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:55:31.638139   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:55:31.638149   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:55:31.638157   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:55:31.638164   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:55:31.638174   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:55:31.638183   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:55:31.638191   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:55:31.638198   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:55:31.638205   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:55:33.638590   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 2
	I1105 10:55:33.638614   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:33.638673   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:33.639799   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:33.639884   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1105 10:55:33.639893   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 10:55:33.639901   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:55:33.639907   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:55:33.639913   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:55:33.639918   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:55:33.639927   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:55:33.639935   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:55:33.639947   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:55:33.639956   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:55:33.639972   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:55:33.639984   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:55:33.639991   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:55:33.639996   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:55:33.640009   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:55:33.640020   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:55:33.640026   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:55:33.640034   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:55:33.640042   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:55:33.640049   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:55:33.640058   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:55:35.640815   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 3
	I1105 10:55:35.640828   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:35.640911   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:35.641870   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:35.641966   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1105 10:55:35.641976   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 10:55:35.641985   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:55:35.641994   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:55:35.642011   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:55:35.642026   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:55:35.642040   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:55:35.642053   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:55:35.642067   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:55:35.642076   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:55:35.642082   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:55:35.642088   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:55:35.642103   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:55:35.642116   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:55:35.642125   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:55:35.642131   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:55:35.642149   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:55:35.642161   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:55:35.642172   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:55:35.642180   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:55:35.642189   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:55:35.933404   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 10:55:35.933437   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 10:55:35.933449   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 10:55:35.957046   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | 2024/11/05 10:55:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I1105 10:55:37.643206   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 4
	I1105 10:55:37.643240   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:37.643317   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:37.644272   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:37.644396   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1105 10:55:37.644411   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 10:55:37.644424   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 10:55:37.644437   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 10:55:37.644450   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 10:55:37.644461   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 10:55:37.644477   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 10:55:37.644487   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 10:55:37.644497   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 10:55:37.644507   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 10:55:37.644517   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 10:55:37.644528   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 10:55:37.644538   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 10:55:37.644551   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 10:55:37.644590   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 10:55:37.644604   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 10:55:37.644613   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 10:55:37.644620   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 10:55:37.644627   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 10:55:37.644632   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 10:55:37.644647   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 10:55:39.646535   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Attempt 5
	I1105 10:55:39.646565   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:39.646577   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:39.647580   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Searching for 96:62:b6:b6:75:db in /var/db/dhcpd_leases ...
	I1105 10:55:39.647649   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found 21 entries in /var/db/dhcpd_leases!
	I1105 10:55:39.647659   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:96:62:b6:b6:75:db ID:1,96:62:b6:b6:75:db Lease:0x672a783a}
	I1105 10:55:39.647666   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Found match: 96:62:b6:b6:75:db
	I1105 10:55:39.647670   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | IP: 192.169.0.22
	I1105 10:55:39.647716   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetConfigRaw
	I1105 10:55:39.648313   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:39.648427   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:39.648539   23258 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 10:55:39.648548   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetState
	I1105 10:55:39.648668   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:55:39.648719   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 23267
	I1105 10:55:39.649619   23258 main.go:141] libmachine: Detecting operating system of created instance...
	I1105 10:55:39.649630   23258 main.go:141] libmachine: Waiting for SSH to be available...
	I1105 10:55:39.649643   23258 main.go:141] libmachine: Getting to WaitForSSH function...
	I1105 10:55:39.649656   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:39.649756   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:39.649862   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:39.649957   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:39.650069   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:39.650415   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:39.650602   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:39.650610   23258 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1105 10:55:40.704121   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:55:40.704133   23258 main.go:141] libmachine: Detecting the provisioner...
	I1105 10:55:40.704139   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:40.704273   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:40.704371   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.704458   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.704551   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:40.704707   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:40.704851   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:40.704858   23258 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 10:55:40.756803   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1105 10:55:40.756848   23258 main.go:141] libmachine: found compatible host: buildroot
	I1105 10:55:40.756854   23258 main.go:141] libmachine: Provisioning with buildroot...
	I1105 10:55:40.756859   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetMachineName
	I1105 10:55:40.756995   23258 buildroot.go:166] provisioning hostname "kubernetes-upgrade-498000"
	I1105 10:55:40.757007   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetMachineName
	I1105 10:55:40.757114   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:40.757206   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:40.757299   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.757390   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.757479   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:40.757623   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:40.757758   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:40.757767   23258 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-498000 && echo "kubernetes-upgrade-498000" | sudo tee /etc/hostname
	I1105 10:55:40.821481   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-498000
	
	I1105 10:55:40.821503   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:40.821632   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:40.821729   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.821825   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:40.821913   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:40.822069   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:40.822202   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:40.822214   23258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-498000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-498000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-498000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 10:55:40.881128   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 10:55:40.881148   23258 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 10:55:40.881161   23258 buildroot.go:174] setting up certificates
	I1105 10:55:40.881173   23258 provision.go:84] configureAuth start
	I1105 10:55:40.881180   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetMachineName
	I1105 10:55:40.881314   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetIP
	I1105 10:55:40.881429   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:40.881548   23258 provision.go:143] copyHostCerts
	I1105 10:55:40.881641   23258 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 10:55:40.881647   23258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 10:55:40.881814   23258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 10:55:40.882050   23258 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 10:55:40.882056   23258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 10:55:40.882811   23258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 10:55:40.883073   23258 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 10:55:40.883079   23258 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 10:55:40.883186   23258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 10:55:40.883345   23258 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-498000 san=[127.0.0.1 192.169.0.22 kubernetes-upgrade-498000 localhost minikube]
	I1105 10:55:41.099329   23258 provision.go:177] copyRemoteCerts
	I1105 10:55:41.099403   23258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 10:55:41.099420   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:41.099568   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:41.099671   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.099783   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:41.099876   23258 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 10:55:41.132830   23258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 10:55:41.152967   23258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1105 10:55:41.172902   23258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 10:55:41.192746   23258 provision.go:87] duration metric: took 311.551066ms to configureAuth
	I1105 10:55:41.192759   23258 buildroot.go:189] setting minikube options for container-runtime
	I1105 10:55:41.192885   23258 config.go:182] Loaded profile config "kubernetes-upgrade-498000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1105 10:55:41.192904   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:41.193041   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:41.193133   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:41.193227   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.193306   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.193388   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:41.193525   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:41.193655   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:41.193663   23258 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 10:55:41.246426   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 10:55:41.246441   23258 buildroot.go:70] root file system type: tmpfs
	I1105 10:55:41.246517   23258 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 10:55:41.246531   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:41.246665   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:41.246763   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.246844   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.246944   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:41.247094   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:41.247233   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:41.247276   23258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 10:55:41.310904   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 10:55:41.310923   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:41.311064   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:41.311166   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.311292   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:41.311401   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:41.311568   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:41.311710   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:41.311722   23258 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 10:55:42.817157   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 10:55:42.817185   23258 main.go:141] libmachine: Checking connection to Docker...
	I1105 10:55:42.817196   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetURL
	I1105 10:55:42.817368   23258 main.go:141] libmachine: Docker is up and running!
	I1105 10:55:42.817375   23258 main.go:141] libmachine: Reticulating splines...
	I1105 10:55:42.817380   23258 client.go:171] duration metric: took 14.049594005s to LocalClient.Create
	I1105 10:55:42.817392   23258 start.go:167] duration metric: took 14.049634638s to libmachine.API.Create "kubernetes-upgrade-498000"
	I1105 10:55:42.817409   23258 start.go:293] postStartSetup for "kubernetes-upgrade-498000" (driver="hyperkit")
	I1105 10:55:42.817416   23258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 10:55:42.817426   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:42.817661   23258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 10:55:42.817674   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:42.817802   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:42.817898   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:42.818028   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:42.818145   23258 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 10:55:42.852695   23258 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 10:55:42.857635   23258 info.go:137] Remote host: Buildroot 2023.02.9
	I1105 10:55:42.857660   23258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 10:55:42.857775   23258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 10:55:42.858257   23258 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 10:55:42.858529   23258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 10:55:42.870259   23258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 10:55:42.906479   23258 start.go:296] duration metric: took 89.05925ms for postStartSetup
	I1105 10:55:42.906514   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetConfigRaw
	I1105 10:55:42.907188   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetIP
	I1105 10:55:42.907360   23258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/config.json ...
	I1105 10:55:42.907746   23258 start.go:128] duration metric: took 14.174053951s to createHost
	I1105 10:55:42.907762   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:42.907871   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:42.907990   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:42.908085   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:42.908175   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:42.908309   23258 main.go:141] libmachine: Using SSH client type: native
	I1105 10:55:42.908428   23258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa915620] 0xa918300 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I1105 10:55:42.908435   23258 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1105 10:55:42.961461   23258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1730832942.951332610
	
	I1105 10:55:42.961474   23258 fix.go:216] guest clock: 1730832942.951332610
	I1105 10:55:42.961479   23258 fix.go:229] Guest: 2024-11-05 10:55:42.95133261 -0800 PST Remote: 2024-11-05 10:55:42.907754 -0800 PST m=+14.769359078 (delta=43.57861ms)
	I1105 10:55:42.961495   23258 fix.go:200] guest clock delta is within tolerance: 43.57861ms
	I1105 10:55:42.961500   23258 start.go:83] releasing machines lock for "kubernetes-upgrade-498000", held for 14.227866305s
	I1105 10:55:42.961517   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:42.961669   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetIP
	I1105 10:55:42.961787   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:42.962138   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:42.962266   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 10:55:42.962439   23258 ssh_runner.go:195] Run: cat /version.json
	I1105 10:55:42.962451   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:42.962533   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:42.962619   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:42.962718   23258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 10:55:42.962719   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:42.962747   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 10:55:42.962808   23258 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 10:55:42.962842   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 10:55:42.962923   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 10:55:42.963020   23258 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 10:55:42.963125   23258 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 10:55:42.991636   23258 ssh_runner.go:195] Run: systemctl --version
	I1105 10:55:43.040836   23258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1105 10:55:43.045319   23258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1105 10:55:43.045377   23258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1105 10:55:43.053449   23258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1105 10:55:43.066906   23258 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1105 10:55:43.066924   23258 start.go:495] detecting cgroup driver to use...
	I1105 10:55:43.067032   23258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:55:43.090186   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1105 10:55:43.099790   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1105 10:55:43.108872   23258 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1105 10:55:43.108930   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1105 10:55:43.118119   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:55:43.127033   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1105 10:55:43.136077   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1105 10:55:43.145008   23258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 10:55:43.154080   23258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1105 10:55:43.163094   23258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 10:55:43.171110   23258 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1105 10:55:43.171161   23258 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1105 10:55:43.180332   23258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 10:55:43.188724   23258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:55:43.288260   23258 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1105 10:55:43.308381   23258 start.go:495] detecting cgroup driver to use...
	I1105 10:55:43.308503   23258 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1105 10:55:43.323435   23258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:55:43.341677   23258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 10:55:43.363207   23258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 10:55:43.374820   23258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:55:43.385067   23258 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1105 10:55:43.404564   23258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1105 10:55:43.414888   23258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 10:55:43.430308   23258 ssh_runner.go:195] Run: which cri-dockerd
	I1105 10:55:43.433094   23258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1105 10:55:43.440469   23258 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1105 10:55:43.453760   23258 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1105 10:55:43.553388   23258 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1105 10:55:43.653632   23258 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1105 10:55:43.653704   23258 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1105 10:55:43.669062   23258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 10:55:43.761389   23258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1105 10:56:44.621874   23258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.859050448s)
	I1105 10:56:44.621962   23258 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1105 10:56:44.656340   23258 out.go:201] 
	W1105 10:56:44.677039   23258 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:55:41 kubernetes-upgrade-498000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.561318532Z" level=info msg="Starting up"
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.561887747Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.562396716Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.580340291Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595828148Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595874792Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595917025Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595928082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595982032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596013201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596145948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596181190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596193535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596201649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596257393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596407734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598033812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598072759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598722017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598781258Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598934139Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.599026629Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602094070Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602166516Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602206246Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602241144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602275072Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602371763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602550455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602718366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602760655Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602794649Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602827652Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602859299Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602891051Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602924639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602958038Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603155733Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603201912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603233591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603271230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603308052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603350115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603386541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603420116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603454356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603485305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603516127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603549692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603587239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603620358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603651876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603685240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603720983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603757508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603789054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603819417Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603875496Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603915390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603946749Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603980208Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604063830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604121040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604157817Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604294191Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604353981Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604407882Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604448865Z" level=info msg="containerd successfully booted in 0.024849s"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.589141849Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.598236676Z" level=info msg="Loading containers: start."
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.677777682Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.767695445Z" level=info msg="Loading containers: done."
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.776944644Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777010099Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777095138Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777751045Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.801746035Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:55:42 kubernetes-upgrade-498000 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.802692957Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.784801084Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785709251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785798679Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785853828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785865691Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:55:43 kubernetes-upgrade-498000 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:55:44 kubernetes-upgrade-498000 dockerd[992]: time="2024-11-05T18:55:44.819378985Z" level=info msg="Starting up"
	Nov 05 18:56:44 kubernetes-upgrade-498000 dockerd[992]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 18:55:41 kubernetes-upgrade-498000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.561318532Z" level=info msg="Starting up"
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.561887747Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:41.562396716Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.580340291Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595828148Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595874792Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595917025Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595928082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.595982032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596013201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596145948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596181190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596193535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596201649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596257393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.596407734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598033812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598072759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598722017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598781258Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.598934139Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.599026629Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602094070Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602166516Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602206246Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602241144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602275072Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602371763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602550455Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602718366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602760655Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602794649Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602827652Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602859299Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602891051Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602924639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.602958038Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603155733Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603201912Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603233591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603271230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603308052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603350115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603386541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603420116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603454356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603485305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603516127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603549692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603587239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603620358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603651876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603685240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603720983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603757508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603789054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603819417Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603875496Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603915390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603946749Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.603980208Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604063830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604121040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604157817Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604294191Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604353981Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604407882Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 18:55:41 kubernetes-upgrade-498000 dockerd[517]: time="2024-11-05T18:55:41.604448865Z" level=info msg="containerd successfully booted in 0.024849s"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.589141849Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.598236676Z" level=info msg="Loading containers: start."
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.677777682Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.767695445Z" level=info msg="Loading containers: done."
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.776944644Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777010099Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777095138Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.777751045Z" level=info msg="Daemon has completed initialization"
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.801746035Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 18:55:42 kubernetes-upgrade-498000 systemd[1]: Started Docker Application Container Engine.
	Nov 05 18:55:42 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:42.802692957Z" level=info msg="API listen on [::]:2376"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.784801084Z" level=info msg="Processing signal 'terminated'"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785709251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785798679Z" level=info msg="Daemon shutdown complete"
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785853828Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 18:55:43 kubernetes-upgrade-498000 dockerd[511]: time="2024-11-05T18:55:43.785865691Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 18:55:43 kubernetes-upgrade-498000 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 18:55:44 kubernetes-upgrade-498000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 18:55:44 kubernetes-upgrade-498000 dockerd[992]: time="2024-11-05T18:55:44.819378985Z" level=info msg="Starting up"
	Nov 05 18:56:44 kubernetes-upgrade-498000 dockerd[992]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 18:56:44 kubernetes-upgrade-498000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1105 10:56:44.677092   23258 out.go:270] * 
	* 
	W1105 10:56:44.677728   23258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1105 10:56:44.740144   23258 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-498000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-498000: (8.390931771s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-498000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-498000 status --format={{.Host}}: exit status 7 (83.257674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1105 10:57:54.395743   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:59:34.181465   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:00:59.167664   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:01:31.312576   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:02:22.244396   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:04:34.187150   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:05:59.173739   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:06:31.319363   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (10m4.717908739s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-498000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (501.991622ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-498000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-498000
	    minikube start -p kubernetes-upgrade-498000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4980002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-498000 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit 
E1105 11:09:34.246895   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:10:57.328943   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:10:59.233043   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:11:31.377671   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:14:34.255706   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:14:34.475703   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:15:59.240933   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
E1105 11:16:31.387870   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-498000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=hyperkit : (10m42.103737189s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-11-05 11:17:40.741233 -0800 PST m=+5837.162053622
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-498000 -n kubernetes-upgrade-498000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-498000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-498000 logs -n 25: (3.427536445s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-892000          | force-systemd-flag-892000 | jenkins | v1.34.0 | 05 Nov 24 10:44 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-817000              | force-systemd-env-817000  | jenkins | v1.34.0 | 05 Nov 24 10:45 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-817000           | force-systemd-env-817000  | jenkins | v1.34.0 | 05 Nov 24 10:45 PST | 05 Nov 24 10:45 PST |
	| start   | -p docker-flags-536000                | docker-flags-536000       | jenkins | v1.34.0 | 05 Nov 24 10:45 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-892000             | force-systemd-flag-892000 | jenkins | v1.34.0 | 05 Nov 24 10:48 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-892000          | force-systemd-flag-892000 | jenkins | v1.34.0 | 05 Nov 24 10:48 PST | 05 Nov 24 10:48 PST |
	| start   | -p cert-expiration-488000             | cert-expiration-488000    | jenkins | v1.34.0 | 05 Nov 24 10:48 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-536000 ssh               | docker-flags-536000       | jenkins | v1.34.0 | 05 Nov 24 10:49 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-536000 ssh               | docker-flags-536000       | jenkins | v1.34.0 | 05 Nov 24 10:49 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-536000                | docker-flags-536000       | jenkins | v1.34.0 | 05 Nov 24 10:49 PST | 05 Nov 24 10:49 PST |
	| start   | -p cert-options-316000                | cert-options-316000       | jenkins | v1.34.0 | 05 Nov 24 10:49 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | cert-options-316000 ssh               | cert-options-316000       | jenkins | v1.34.0 | 05 Nov 24 10:53 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-316000 -- sudo        | cert-options-316000       | jenkins | v1.34.0 | 05 Nov 24 10:53 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-316000                | cert-options-316000       | jenkins | v1.34.0 | 05 Nov 24 10:53 PST | 05 Nov 24 10:53 PST |
	| start   | -p running-upgrade-379000             | minikube                  | jenkins | v1.26.0 | 05 Nov 24 10:54 PST | 05 Nov 24 10:54 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=hyperkit                  |                           |         |         |                     |                     |
	| start   | -p running-upgrade-379000             | running-upgrade-379000    | jenkins | v1.34.0 | 05 Nov 24 10:54 PST | 05 Nov 24 10:55 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-379000             | running-upgrade-379000    | jenkins | v1.34.0 | 05 Nov 24 10:55 PST | 05 Nov 24 10:55 PST |
	| start   | -p kubernetes-upgrade-498000          | kubernetes-upgrade-498000 | jenkins | v1.34.0 | 05 Nov 24 10:55 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-488000             | cert-expiration-488000    | jenkins | v1.34.0 | 05 Nov 24 10:55 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-498000          | kubernetes-upgrade-498000 | jenkins | v1.34.0 | 05 Nov 24 10:56 PST | 05 Nov 24 10:56 PST |
	| start   | -p kubernetes-upgrade-498000          | kubernetes-upgrade-498000 | jenkins | v1.34.0 | 05 Nov 24 10:56 PST | 05 Nov 24 11:06 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-498000          | kubernetes-upgrade-498000 | jenkins | v1.34.0 | 05 Nov 24 11:06 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-498000          | kubernetes-upgrade-498000 | jenkins | v1.34.0 | 05 Nov 24 11:06 PST | 05 Nov 24 11:17 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-488000             | cert-expiration-488000    | jenkins | v1.34.0 | 05 Nov 24 11:17 PST | 05 Nov 24 11:17 PST |
	| start   | -p stopped-upgrade-588000             | minikube                  | jenkins | v1.26.0 | 05 Nov 24 11:17 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=hyperkit                  |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 11:17:25
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 11:17:25.031657   24660 out.go:296] Setting OutFile to fd 1 ...
	I1105 11:17:25.031912   24660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1105 11:17:25.031915   24660 out.go:309] Setting ErrFile to fd 2...
	I1105 11:17:25.031918   24660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1105 11:17:25.032256   24660 root.go:329] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 11:17:25.032612   24660 out.go:303] Setting JSON to false
	I1105 11:17:25.060663   24660 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":11814,"bootTime":1730822431,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 11:17:25.060748   24660 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I1105 11:17:25.083455   24660 out.go:177] * [stopped-upgrade-588000] minikube v1.26.0 on Darwin 15.0.1
	I1105 11:17:25.130569   24660 notify.go:193] Checking for updates...
	I1105 11:17:25.152404   24660 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 11:17:25.174277   24660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 11:17:25.195253   24660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 11:17:25.216616   24660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 11:17:25.238595   24660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 11:17:25.260414   24660 out.go:177]   - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1363126946
	I1105 11:17:25.281732   24660 config.go:178] Loaded profile config "kubernetes-upgrade-498000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 11:17:25.281783   24660 driver.go:360] Setting default libvirt URI to qemu:///system
	I1105 11:17:25.313449   24660 out.go:177] * Using the hyperkit driver based on user configuration
	I1105 11:17:25.355257   24660 start.go:284] selected driver: hyperkit
	I1105 11:17:25.355325   24660 start.go:805] validating driver "hyperkit" against <nil>
	I1105 11:17:25.355348   24660 start.go:816] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 11:17:25.362312   24660 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 11:17:25.362433   24660 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 11:17:25.373038   24660 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 11:17:25.379824   24660 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:25.379839   24660 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 11:17:25.379877   24660 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I1105 11:17:25.380026   24660 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 11:17:25.380050   24660 cni.go:95] Creating CNI manager for ""
	I1105 11:17:25.380058   24660 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1105 11:17:25.380067   24660 start_flags.go:310] config:
	{Name:stopped-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-588000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I1105 11:17:25.380168   24660 iso.go:128] acquiring lock: {Name:mk156115ead97870943ea66402d1e5ee66a99cd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 11:17:25.401381   24660 out.go:177] * Starting control plane node stopped-upgrade-588000 in cluster stopped-upgrade-588000
	I1105 11:17:25.422303   24660 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1105 11:17:25.422361   24660 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I1105 11:17:25.422384   24660 cache.go:57] Caching tarball of preloaded images
	I1105 11:17:25.422663   24660 preload.go:174] Found /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1105 11:17:25.422679   24660 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I1105 11:17:25.422823   24660 profile.go:148] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/stopped-upgrade-588000/config.json ...
	I1105 11:17:25.422851   24660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/stopped-upgrade-588000/config.json: {Name:mk111d670fa73f58774586241fa09f2f71bd2823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 11:17:25.423510   24660 cache.go:208] Successfully downloaded all kic artifacts
	I1105 11:17:25.423556   24660 start.go:352] acquiring machines lock for stopped-upgrade-588000: {Name:mk67e3fe9c26d68e6bc4121ccfd9f37c1a8d85cb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1105 11:17:25.423685   24660 start.go:356] acquired machines lock for "stopped-upgrade-588000" in 117.179µs
	I1105 11:17:25.423710   24660 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-588000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 11:17:25.423773   24660 start.go:131] createHost starting for "" (driver="hyperkit")
	I1105 11:17:25.466555   24660 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1105 11:17:25.466941   24660 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:25.466987   24660 main.go:134] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:25.478996   24660 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61490
	I1105 11:17:25.479320   24660 main.go:134] libmachine: () Calling .GetVersion
	I1105 11:17:25.479736   24660 main.go:134] libmachine: Using API Version  1
	I1105 11:17:25.479744   24660 main.go:134] libmachine: () Calling .SetConfigRaw
	I1105 11:17:25.479992   24660 main.go:134] libmachine: () Calling .GetMachineName
	I1105 11:17:25.480102   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetMachineName
	I1105 11:17:25.480211   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:25.480316   24660 start.go:165] libmachine.API.Create for "stopped-upgrade-588000" (driver="hyperkit")
	I1105 11:17:25.480340   24660 client.go:168] LocalClient.Create starting
	I1105 11:17:25.480374   24660 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem
	I1105 11:17:25.480439   24660 main.go:134] libmachine: Decoding PEM data...
	I1105 11:17:25.480452   24660 main.go:134] libmachine: Parsing certificate...
	I1105 11:17:25.480515   24660 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem
	I1105 11:17:25.480558   24660 main.go:134] libmachine: Decoding PEM data...
	I1105 11:17:25.480568   24660 main.go:134] libmachine: Parsing certificate...
	I1105 11:17:25.480580   24660 main.go:134] libmachine: Running pre-create checks...
	I1105 11:17:25.480586   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .PreCreateCheck
	I1105 11:17:25.480661   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:25.480840   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetConfigRaw
	I1105 11:17:25.481358   24660 main.go:134] libmachine: Creating machine...
	I1105 11:17:25.481364   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .Create
	I1105 11:17:25.481426   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:25.481585   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | I1105 11:17:25.481422   24668 common.go:144] Making disk image using store path: /Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 11:17:25.481635   24660 main.go:134] libmachine: (stopped-upgrade-588000) Downloading /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I1105 11:17:25.634493   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | I1105 11:17:25.634402   24668 common.go:151] Creating ssh key: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/id_rsa...
	I1105 11:17:25.781985   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | I1105 11:17:25.781895   24668 common.go:157] Creating raw disk image: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/stopped-upgrade-588000.rawdisk...
	I1105 11:17:25.781992   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Writing magic tar header
	I1105 11:17:25.782000   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Writing SSH key tar header
	I1105 11:17:25.782733   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | I1105 11:17:25.782668   24668 common.go:171] Fixing permissions on /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000 ...
	I1105 11:17:26.085261   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:26.085276   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/hyperkit.pid
	I1105 11:17:26.085315   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Using UUID 972a8284-9baa-11ef-bc66-149d997fca88
	I1105 11:17:26.110669   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Generated MAC 0e:d4:48:c0:eb:4b
	I1105 11:17:26.110692   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-588000
	I1105 11:17:26.110729   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"972a8284-9baa-11ef-bc66-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e61e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 11:17:26.110754   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"972a8284-9baa-11ef-bc66-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e61e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1105 11:17:26.110796   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "972a8284-9baa-11ef-bc66-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/stopped-upgrade-588000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-588000"}
	I1105 11:17:26.110831   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 972a8284-9baa-11ef-bc66-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/stopped-upgrade-588000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/tty,log=/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/console-ring -f kexec,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/bzimage,/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-588000"
	I1105 11:17:26.110846   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1105 11:17:26.113518   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 DEBUG: hyperkit: Pid is 24669
	I1105 11:17:26.114126   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Attempt 0
	I1105 11:17:26.114141   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:26.114187   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:26.115584   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Searching for 0e:d4:48:c0:eb:4b in /var/db/dhcpd_leases ...
	I1105 11:17:26.115732   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found 21 entries in /var/db/dhcpd_leases!
	I1105 11:17:26.115745   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:96:62:b6:b6:75:db ID:1,96:62:b6:b6:75:db Lease:0x672a7ac7}
	I1105 11:17:26.115750   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 11:17:26.115759   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 11:17:26.115764   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 11:17:26.115771   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 11:17:26.115776   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 11:17:26.115793   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 11:17:26.115802   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 11:17:26.115816   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 11:17:26.115822   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 11:17:26.115828   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 11:17:26.115835   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 11:17:26.115841   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 11:17:26.115846   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 11:17:26.115852   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 11:17:26.115859   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 11:17:26.115865   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 11:17:26.115876   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 11:17:26.115882   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 11:17:26.115892   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 11:17:26.115902   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 11:17:26.123797   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: Using fd 6 for I/O notifications
	I1105 11:17:26.133123   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1105 11:17:26.134198   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 11:17:26.134216   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 11:17:26.134228   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 11:17:26.134242   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 11:17:26.496054   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1105 11:17:26.496070   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1105 11:17:26.600090   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1105 11:17:26.600106   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1105 11:17:26.600132   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1105 11:17:26.600142   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1105 11:17:26.600979   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1105 11:17:26.600986   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1105 11:17:28.116964   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Attempt 1
	I1105 11:17:28.116979   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:28.117040   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:28.118020   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Searching for 0e:d4:48:c0:eb:4b in /var/db/dhcpd_leases ...
	I1105 11:17:28.118105   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found 21 entries in /var/db/dhcpd_leases!
	I1105 11:17:28.118112   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:96:62:b6:b6:75:db ID:1,96:62:b6:b6:75:db Lease:0x672a7ac7}
	I1105 11:17:28.118120   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 11:17:28.118132   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 11:17:28.118137   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 11:17:28.118144   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 11:17:28.118149   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 11:17:28.118155   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 11:17:28.118163   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 11:17:28.118170   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 11:17:28.118175   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 11:17:28.118190   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 11:17:28.118198   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 11:17:28.118204   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 11:17:28.118210   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 11:17:28.118216   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 11:17:28.118225   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 11:17:28.118246   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 11:17:28.118256   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 11:17:28.118262   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 11:17:28.118267   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 11:17:28.118277   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 11:17:30.272656   24145 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.476074549s)
	I1105 11:17:30.272739   24145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1105 11:17:30.286505   24145 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1105 11:17:30.324529   24145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 11:17:30.341352   24145 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1105 11:17:30.445484   24145 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1105 11:17:30.557198   24145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 11:17:30.670558   24145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1105 11:17:30.687466   24145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1105 11:17:30.697711   24145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 11:17:30.800138   24145 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1105 11:17:30.869032   24145 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1105 11:17:30.870130   24145 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1105 11:17:30.874505   24145 start.go:563] Will wait 60s for crictl version
	I1105 11:17:30.874580   24145 ssh_runner.go:195] Run: which crictl
	I1105 11:17:30.877858   24145 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 11:17:30.903681   24145 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
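The `crictl version` output captured above is a simple "Key:  value" listing. A minimal illustrative sketch of parsing it (a hypothetical helper for reading the log, not minikube's actual Go implementation):

```python
# Hypothetical parser for crictl's "Key:  value" version listing,
# shown only to illustrate the output format captured in the log above.
def parse_crictl_version(text):
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # keep only lines that actually contain a colon
            info[key.strip()] = value.strip()
    return info

# Sample taken verbatim from the log output above.
sample = """Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  27.3.1
RuntimeApiVersion:  v1"""
version_info = parse_crictl_version(sample)
```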
	I1105 11:17:30.903767   24145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 11:17:30.921104   24145 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1105 11:17:30.982110   24145 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.3.1 ...
	I1105 11:17:30.982139   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetIP
	I1105 11:17:30.982625   24145 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I1105 11:17:30.986108   24145 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 11:17:30.986167   24145 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 11:17:30.986232   24145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 11:17:31.001402   24145 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1105 11:17:31.001422   24145 docker.go:619] Images already preloaded, skipping extraction
	I1105 11:17:31.001514   24145 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 11:17:31.015117   24145 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.2
	registry.k8s.io/kube-controller-manager:v1.31.2
	registry.k8s.io/kube-scheduler:v1.31.2
	registry.k8s.io/kube-proxy:v1.31.2
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1105 11:17:31.015145   24145 cache_images.go:84] Images are preloaded, skipping loading
	I1105 11:17:31.015159   24145 kubeadm.go:934] updating node { 192.169.0.22 8443 v1.31.2 docker true true} ...
	I1105 11:17:31.015257   24145 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-498000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 11:17:31.015346   24145 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1105 11:17:31.048309   24145 cni.go:84] Creating CNI manager for ""
	I1105 11:17:31.048328   24145 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 11:17:31.048340   24145 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 11:17:31.048359   24145 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.22 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-498000 NodeName:kubernetes-upgrade-498000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 11:17:31.048447   24145 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-498000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.169.0.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
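The kubeadm config written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A dependency-free sketch of splitting such a stream and pulling out each document's `kind` (string handling only, no YAML library):

```python
# Split a multi-document YAML stream on "---" separators and collect
# each document's top-level "kind" field.
def doc_kinds(stream):
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

# Abbreviated version of the config stream shown in the log above.
sample = """apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration"""
```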
	I1105 11:17:31.048524   24145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 11:17:31.056331   24145 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 11:17:31.056399   24145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 11:17:31.064193   24145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1105 11:17:31.078025   24145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 11:17:31.092509   24145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I1105 11:17:31.107595   24145 ssh_runner.go:195] Run: grep 192.169.0.22	control-plane.minikube.internal$ /etc/hosts
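The `grep ... /etc/hosts` above is the check half of a grep-then-append pattern: the `control-plane.minikube.internal` mapping is only written when no line already names that host. A self-contained sketch of the pattern (hypothetical helper name; operates on any hosts-format file):

```python
import os
import tempfile

def ensure_hosts_entry(path, ip, host):
    # Append "<ip>\t<host>" only when no existing line mentions the host,
    # mirroring the grep-then-append pattern seen in the log above.
    with open(path, "r+") as f:
        if any(host in line for line in f):
            return False  # entry already present, nothing to do
        f.write(f"{ip}\t{host}\n")
        return True
```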
	I1105 11:17:31.110949   24145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 11:17:31.218766   24145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 11:17:31.230628   24145 certs.go:68] Setting up /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000 for IP: 192.169.0.22
	I1105 11:17:31.230640   24145 certs.go:194] generating shared ca certs ...
	I1105 11:17:31.230650   24145 certs.go:226] acquiring lock for ca certs: {Name:mk71cfd5cfa6f19aa54770800e673e4533fb7d9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 11:17:31.230861   24145 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key
	I1105 11:17:31.230962   24145 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key
	I1105 11:17:31.230977   24145 certs.go:256] generating profile certs ...
	I1105 11:17:31.231105   24145 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/client.key
	I1105 11:17:31.231206   24145 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/apiserver.key.714ca9eb
	I1105 11:17:31.231309   24145 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/proxy-client.key
	I1105 11:17:31.231562   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem (1338 bytes)
	W1105 11:17:31.231617   24145 certs.go:480] ignoring /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842_empty.pem, impossibly tiny 0 bytes
	I1105 11:17:31.231626   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 11:17:31.231677   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem (1082 bytes)
	I1105 11:17:31.231712   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem (1123 bytes)
	I1105 11:17:31.231741   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem (1675 bytes)
	I1105 11:17:31.231849   24145 certs.go:484] found cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem (1708 bytes)
	I1105 11:17:31.232473   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 11:17:31.252752   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 11:17:31.272657   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 11:17:31.292852   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 11:17:31.313319   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1105 11:17:31.365477   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1105 11:17:31.414990   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 11:17:31.450940   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 11:17:31.487445   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /usr/share/ca-certificates/178422.pem (1708 bytes)
	I1105 11:17:31.522619   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 11:17:31.550760   24145 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/17842.pem --> /usr/share/ca-certificates/17842.pem (1338 bytes)
	I1105 11:17:31.584201   24145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 11:17:31.603515   24145 ssh_runner.go:195] Run: openssl version
	I1105 11:17:31.608564   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 11:17:31.630868   24145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 11:17:31.635851   24145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I1105 11:17:31.635927   24145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 11:17:31.649944   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 11:17:31.666027   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17842.pem && ln -fs /usr/share/ca-certificates/17842.pem /etc/ssl/certs/17842.pem"
	I1105 11:17:31.682058   24145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17842.pem
	I1105 11:17:31.685798   24145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:52 /usr/share/ca-certificates/17842.pem
	I1105 11:17:31.685873   24145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17842.pem
	I1105 11:17:31.691098   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17842.pem /etc/ssl/certs/51391683.0"
	I1105 11:17:31.701816   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178422.pem && ln -fs /usr/share/ca-certificates/178422.pem /etc/ssl/certs/178422.pem"
	I1105 11:17:31.715041   24145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178422.pem
	I1105 11:17:31.719699   24145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:52 /usr/share/ca-certificates/178422.pem
	I1105 11:17:31.719786   24145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178422.pem
	I1105 11:17:31.724762   24145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178422.pem /etc/ssl/certs/3ec20f2e.0"
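The `ln -fs` commands above install OpenSSL-style hashed symlinks (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) so the CA bundle can be found by subject hash. A sketch of just the symlink step (illustrative; the hash value itself would come from `openssl x509 -hash -noout`, which is not reproduced here):

```python
import os
import tempfile

def install_hashed_link(cert_path, subject_hash, certs_dir):
    # Recreates the `ln -fs <cert> <dir>/<hash>.0` step from the log.
    # subject_hash is assumed to be the output of `openssl x509 -hash`.
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)  # -f: replace any existing link
    os.symlink(cert_path, link)
    return link
```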
	I1105 11:17:31.741287   24145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 11:17:31.745198   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 11:17:31.749930   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 11:17:31.759083   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 11:17:31.767988   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 11:17:31.773341   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 11:17:31.784463   24145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
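The `openssl x509 -checkend 86400` runs above test whether each certificate's notAfter falls within the next 24 hours (the command exits nonzero when the cert will expire inside the window, which triggers regeneration). The same predicate as a date comparison (illustrative, hypothetical function name):

```python
from datetime import datetime, timedelta, timezone

def expires_within(not_after, seconds):
    # True when the certificate's notAfter time falls inside the next
    # <seconds> window -- the condition `openssl x509 -checkend` tests.
    return not_after <= datetime.now(timezone.utc) + timedelta(seconds=seconds)

now = datetime.now(timezone.utc)
```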
	I1105 11:17:31.790917   24145 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-498000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-498000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 11:17:31.791046   24145 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 11:17:31.852383   24145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 11:17:31.864892   24145 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 11:17:31.864907   24145 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 11:17:31.864979   24145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 11:17:31.875124   24145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 11:17:31.875551   24145 kubeconfig.go:125] found "kubernetes-upgrade-498000" server: "https://192.169.0.22:8443"
	I1105 11:17:31.876127   24145 kapi.go:59] client config for kubernetes-upgrade-498000: &rest.Config{Host:"https://192.169.0.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x498be20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 11:17:31.876691   24145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 11:17:31.890395   24145 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.22
	I1105 11:17:31.890427   24145 kubeadm.go:1160] stopping kube-system containers ...
	I1105 11:17:31.890521   24145 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1105 11:17:31.915977   24145 docker.go:483] Stopping containers: [83173e779165 7a9425f760fa 4cd388dc2268 7d3b873259fd d2fd217b9409 7b613e46eb5a 61c5bc86e03f a48c73485fe4 ad8feee15ae1 096a2a3b8596 efee1fb288c9 8b6c0717f24d a947d4a5e946 683fa5b18e68 6aa31a86f4e1 6db96c8a4c22 17ff75b66c53 61205f54ca98 d8376b9d8b65 85c67e6c6d4a cb8627772d63 3b2e44c1c572 32d138f26f32 4cc91e1b902d 27ebe37ed211 a779f7f50425]
	I1105 11:17:31.916083   24145 ssh_runner.go:195] Run: docker stop 83173e779165 7a9425f760fa 4cd388dc2268 7d3b873259fd d2fd217b9409 7b613e46eb5a 61c5bc86e03f a48c73485fe4 ad8feee15ae1 096a2a3b8596 efee1fb288c9 8b6c0717f24d a947d4a5e946 683fa5b18e68 6aa31a86f4e1 6db96c8a4c22 17ff75b66c53 61205f54ca98 d8376b9d8b65 85c67e6c6d4a cb8627772d63 3b2e44c1c572 32d138f26f32 4cc91e1b902d 27ebe37ed211 a779f7f50425
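The two commands above show the pattern for bulk-stopping kube-system containers: `docker ps -a` with a name filter and `--format {{.ID}}` prints one ID per line, and those IDs become the arguments of a single `docker stop`. A sketch of assembling that argument list (illustrative only):

```python
def stop_command(ps_output):
    # Turn the one-ID-per-line output of
    # `docker ps -a --filter=name=... --format {{.ID}}`
    # into a single `docker stop` argument vector.
    ids = [line.strip() for line in ps_output.splitlines() if line.strip()]
    return ["docker", "stop"] + ids
```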
	I1105 11:17:32.265211   24145 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1105 11:17:32.307386   24145 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 11:17:32.315637   24145 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Nov  5 19:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  5 19:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Nov  5 19:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  5 19:06 /etc/kubernetes/scheduler.conf
	
	I1105 11:17:32.315732   24145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 11:17:32.323237   24145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 11:17:32.330624   24145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 11:17:32.338205   24145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1105 11:17:32.338275   24145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 11:17:32.345800   24145 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 11:17:32.353189   24145 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1105 11:17:32.353262   24145 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 11:17:32.360875   24145 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 11:17:32.368471   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:32.408009   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:33.491748   24145 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.083686358s)
	I1105 11:17:33.491765   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:33.654991   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:30.118982   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Attempt 2
	I1105 11:17:30.118992   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:30.119080   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:30.120159   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Searching for 0e:d4:48:c0:eb:4b in /var/db/dhcpd_leases ...
	I1105 11:17:30.120268   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found 21 entries in /var/db/dhcpd_leases!
	I1105 11:17:30.120284   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:96:62:b6:b6:75:db ID:1,96:62:b6:b6:75:db Lease:0x672a7ac7}
	I1105 11:17:30.120298   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 11:17:30.120305   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 11:17:30.120311   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 11:17:30.120328   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 11:17:30.120335   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 11:17:30.120342   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 11:17:30.120347   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 11:17:30.120353   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 11:17:30.120358   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 11:17:30.120384   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 11:17:30.120392   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 11:17:30.120400   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 11:17:30.120405   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 11:17:30.120410   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 11:17:30.120419   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 11:17:30.120426   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 11:17:30.120431   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 11:17:30.120436   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 11:17:30.120442   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 11:17:30.120449   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 11:17:31.114931   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:31 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1105 11:17:31.115002   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:31 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1105 11:17:31.115008   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | 2024/11/05 11:17:31 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1105 11:17:32.122411   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Attempt 3
	I1105 11:17:32.122427   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:32.122508   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:32.123500   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Searching for 0e:d4:48:c0:eb:4b in /var/db/dhcpd_leases ...
	I1105 11:17:32.123618   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found 21 entries in /var/db/dhcpd_leases!
	I1105 11:17:32.123628   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:96:62:b6:b6:75:db ID:1,96:62:b6:b6:75:db Lease:0x672a7ac7}
	I1105 11:17:32.123637   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:4a:04:d8:a4:b0 ID:1,ae:4a:4:d8:a4:b0 Lease:0x672a77f6}
	I1105 11:17:32.123643   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:02:4c:13:f0:45:c6 ID:1,2:4c:13:f0:45:c6 Lease:0x672a7479}
	I1105 11:17:32.123668   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:06:f0:22:94:35:88 ID:1,6:f0:22:94:35:88 Lease:0x672a73b5}
	I1105 11:17:32.123683   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:92:fa:20:6f:47:57 ID:1,92:fa:20:6f:47:57 Lease:0x672a72ae}
	I1105 11:17:32.123696   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:8e:5b:cc:86:47:0a ID:1,8e:5b:cc:86:47:a Lease:0x672a641b}
	I1105 11:17:32.123707   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:86:f1:77:20:86:74 ID:1,86:f1:77:20:86:74 Lease:0x672a7284}
	I1105 11:17:32.123719   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:92:b9:36:22:64:fd ID:1,92:b9:36:22:64:fd Lease:0x672a7248}
	I1105 11:17:32.123728   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:d2:d7:e9:78:89:df ID:1,d2:d7:e9:78:89:df Lease:0x672a6fe7}
	I1105 11:17:32.123735   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:66:06:30:8f:2a ID:1,66:66:6:30:8f:2a Lease:0x672a6fc2}
	I1105 11:17:32.123740   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:9e:96:be:0f:ea:6f ID:1,9e:96:be:f:ea:6f Lease:0x672a6fb1}
	I1105 11:17:32.123745   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d2:c8:91:27:02:4e ID:1,d2:c8:91:27:2:4e Lease:0x672a6f5b}
	I1105 11:17:32.123758   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:02:30:6b:3f:bf:40 ID:1,2:30:6b:3f:bf:40 Lease:0x672a6f2e}
	I1105 11:17:32.123765   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:82:38:b3:b4:03:92 ID:1,82:38:b3:b4:3:92 Lease:0x672a6ec0}
	I1105 11:17:32.123784   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1a:a3:f2:a5:2e:39 ID:1,1a:a3:f2:a5:2e:39 Lease:0x672a6e6b}
	I1105 11:17:32.123791   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:06:83:5c:e9:cb:34 ID:1,6:83:5c:e9:cb:34 Lease:0x672a5fea}
	I1105 11:17:32.123800   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4a:4e:c6:49:69:60 ID:1,4a:4e:c6:49:69:60 Lease:0x672a6e32}
	I1105 11:17:32.123807   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:82:fc:3d:82:28:7c ID:1,82:fc:3d:82:28:7c Lease:0x672a6e1f}
	I1105 11:17:32.123812   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:0a:f2:13:1f:4d:a9 ID:1,a:f2:13:1f:4d:a9 Lease:0x672a6979}
	I1105 11:17:32.123817   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:36:6d:50:88:43 ID:1,42:36:6d:50:88:43 Lease:0x672a68b2}
	I1105 11:17:32.123825   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:8a:ae:5d:dc:69:d7 ID:1,8a:ae:5d:dc:69:d7 Lease:0x672a66c1}
	I1105 11:17:34.124635   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Attempt 4
	I1105 11:17:34.124647   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:34.124703   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:34.125702   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Searching for 0e:d4:48:c0:eb:4b in /var/db/dhcpd_leases ...
	I1105 11:17:34.125762   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I1105 11:17:34.125768   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:0e:d4:48:c0:eb:4b ID:1,e:d4:48:c0:eb:4b Lease:0x672a7d5d}
	I1105 11:17:34.125775   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | Found match: 0e:d4:48:c0:eb:4b
	I1105 11:17:34.125779   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | IP: 192.169.0.23
	I1105 11:17:34.125840   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetConfigRaw
	I1105 11:17:34.126479   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:34.126601   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:34.126697   24660 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1105 11:17:34.126708   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetState
	I1105 11:17:34.126798   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:34.126863   24660 main.go:134] libmachine: (stopped-upgrade-588000) DBG | hyperkit pid from json: 24669
	I1105 11:17:34.127784   24660 main.go:134] libmachine: Detecting operating system of created instance...
	I1105 11:17:34.127789   24660 main.go:134] libmachine: Waiting for SSH to be available...
	I1105 11:17:34.127793   24660 main.go:134] libmachine: Getting to WaitForSSH function...
	I1105 11:17:34.127799   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:34.127892   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:34.127994   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:34.128087   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:34.128164   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:34.128282   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:34.128445   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:34.128450   24660 main.go:134] libmachine: About to run SSH command:
	exit 0
	I1105 11:17:33.719591   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:33.781887   24145 api_server.go:52] waiting for apiserver process to appear ...
	I1105 11:17:33.781975   24145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 11:17:34.282174   24145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 11:17:34.782091   24145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 11:17:34.795765   24145 api_server.go:72] duration metric: took 1.01385213s to wait for apiserver process to appear ...
	I1105 11:17:34.795780   24145 api_server.go:88] waiting for apiserver healthz status ...
	I1105 11:17:34.795796   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:37.681855   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 11:17:37.681877   24145 api_server.go:103] status: https://192.169.0.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 11:17:37.681898   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:37.756476   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 11:17:37.756495   24145 api_server.go:103] status: https://192.169.0.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 11:17:37.797249   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:37.807828   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1105 11:17:37.807851   24145 api_server.go:103] status: https://192.169.0.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1105 11:17:38.297473   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:38.300929   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 11:17:38.300945   24145 api_server.go:103] status: https://192.169.0.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 11:17:38.797133   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:38.800404   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 200:
	ok
	I1105 11:17:38.805324   24145 api_server.go:141] control plane version: v1.31.2
	I1105 11:17:38.805341   24145 api_server.go:131] duration metric: took 4.009431854s to wait for apiserver health ...
	I1105 11:17:38.805347   24145 cni.go:84] Creating CNI manager for ""
	I1105 11:17:38.805360   24145 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 11:17:38.825601   24145 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1105 11:17:38.846573   24145 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1105 11:17:38.854244   24145 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1105 11:17:38.868794   24145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 11:17:38.868842   24145 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 11:17:38.868852   24145 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 11:17:38.876096   24145 system_pods.go:59] 8 kube-system pods found
	I1105 11:17:38.876116   24145 system_pods.go:61] "coredns-7c65d6cfc9-d6pvt" [e4bb9bc4-b09a-4f29-98c8-aa1860e15d14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 11:17:38.876124   24145 system_pods.go:61] "coredns-7c65d6cfc9-h89l5" [ff9c9b43-a44a-4463-9ee8-ffd227c192b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 11:17:38.876129   24145 system_pods.go:61] "etcd-kubernetes-upgrade-498000" [2453e004-26e8-4148-b52e-21b1aeb47a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 11:17:38.876138   24145 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-498000" [d5d995df-7ac3-49ce-86a6-d1e6bfbe55bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 11:17:38.876143   24145 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-498000" [7ed5683f-bf7e-42ad-b20d-02de8e7a1fca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 11:17:38.876150   24145 system_pods.go:61] "kube-proxy-fr96x" [e7ba4af2-b6ed-40e8-9290-cc14aaa831a8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 11:17:38.876155   24145 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-498000" [0f97b662-2323-4cc6-a8ab-18e5436e303e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 11:17:38.876159   24145 system_pods.go:61] "storage-provisioner" [1f3bfba1-5ccd-4916-98f4-ca68037ae457] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 11:17:38.876164   24145 system_pods.go:74] duration metric: took 7.358334ms to wait for pod list to return data ...
	I1105 11:17:38.876171   24145 node_conditions.go:102] verifying NodePressure condition ...
	I1105 11:17:38.879397   24145 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 11:17:38.879415   24145 node_conditions.go:123] node cpu capacity is 2
	I1105 11:17:38.879427   24145 node_conditions.go:105] duration metric: took 3.25193ms to run NodePressure ...
	I1105 11:17:38.879440   24145 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1105 11:17:39.160050   24145 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 11:17:39.169704   24145 ops.go:34] apiserver oom_adj: -16
	I1105 11:17:39.169717   24145 kubeadm.go:597] duration metric: took 7.304586025s to restartPrimaryControlPlane
	I1105 11:17:39.169724   24145 kubeadm.go:394] duration metric: took 7.378596646s to StartCluster
	I1105 11:17:39.169735   24145 settings.go:142] acquiring lock: {Name:mkb9db6c39cf305021d5d9ea8e7cd364fbed4154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 11:17:39.169841   24145 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 11:17:39.170325   24145 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19910-17277/kubeconfig: {Name:mk020782da2535e8a484bb28e080ca9961ae0c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 11:17:39.170594   24145 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1105 11:17:39.170623   24145 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 11:17:39.170694   24145 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-498000"
	I1105 11:17:39.170720   24145 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-498000"
	W1105 11:17:39.170729   24145 addons.go:243] addon storage-provisioner should already be in state true
	I1105 11:17:39.170766   24145 config.go:182] Loaded profile config "kubernetes-upgrade-498000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 11:17:39.170769   24145 host.go:66] Checking if "kubernetes-upgrade-498000" exists ...
	I1105 11:17:39.170783   24145 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-498000"
	I1105 11:17:39.170827   24145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-498000"
	I1105 11:17:39.171083   24145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:39.171112   24145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:39.171662   24145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:39.171843   24145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:39.186584   24145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61519
	I1105 11:17:39.186593   24145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61520
	I1105 11:17:39.186913   24145 main.go:141] libmachine: () Calling .GetVersion
	I1105 11:17:39.187026   24145 main.go:141] libmachine: () Calling .GetVersion
	I1105 11:17:39.187247   24145 main.go:141] libmachine: Using API Version  1
	I1105 11:17:39.187258   24145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 11:17:39.187363   24145 main.go:141] libmachine: Using API Version  1
	I1105 11:17:39.187375   24145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 11:17:39.187541   24145 main.go:141] libmachine: () Calling .GetMachineName
	I1105 11:17:39.187613   24145 main.go:141] libmachine: () Calling .GetMachineName
	I1105 11:17:39.187674   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetState
	I1105 11:17:39.187773   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:39.187860   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 24106
	I1105 11:17:39.188012   24145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:39.188041   24145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:39.189891   24145 kapi.go:59] client config for kubernetes-upgrade-498000: &rest.Config{Host:"https://192.169.0.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/kubernetes-upgrade-498000/client.key", CAFile:"/Users/jenkins/minikube-integration/19910-17277/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x498be20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 11:17:39.190276   24145 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-498000"
	W1105 11:17:39.190285   24145 addons.go:243] addon default-storageclass should already be in state true
	I1105 11:17:39.190300   24145 host.go:66] Checking if "kubernetes-upgrade-498000" exists ...
	I1105 11:17:39.190538   24145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:39.190586   24145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:39.192712   24145 out.go:177] * Verifying Kubernetes components...
	I1105 11:17:39.199657   24145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61523
	I1105 11:17:39.201511   24145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61524
	I1105 11:17:39.215388   24145 main.go:141] libmachine: () Calling .GetVersion
	I1105 11:17:39.215418   24145 main.go:141] libmachine: () Calling .GetVersion
	I1105 11:17:39.215741   24145 main.go:141] libmachine: Using API Version  1
	I1105 11:17:39.215760   24145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 11:17:39.215828   24145 main.go:141] libmachine: Using API Version  1
	I1105 11:17:39.215849   24145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 11:17:39.216015   24145 main.go:141] libmachine: () Calling .GetMachineName
	I1105 11:17:39.216065   24145 main.go:141] libmachine: () Calling .GetMachineName
	I1105 11:17:39.216172   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetState
	I1105 11:17:39.216277   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:39.216354   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 24106
	I1105 11:17:39.216425   24145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 11:17:39.216457   24145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 11:17:39.218789   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 11:17:39.227927   24145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61527
	I1105 11:17:39.235347   24145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 11:17:39.235628   24145 main.go:141] libmachine: () Calling .GetVersion
	I1105 11:17:39.236037   24145 main.go:141] libmachine: Using API Version  1
	I1105 11:17:39.236055   24145 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 11:17:39.236287   24145 main.go:141] libmachine: () Calling .GetMachineName
	I1105 11:17:39.236401   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetState
	I1105 11:17:39.236502   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 11:17:39.236592   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | hyperkit pid from json: 24106
	I1105 11:17:39.237828   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .DriverName
	I1105 11:17:39.237981   24145 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 11:17:39.237988   24145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 11:17:39.237997   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 11:17:39.238080   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 11:17:39.238164   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 11:17:39.238258   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 11:17:39.238377   24145 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 11:17:39.256180   24145 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 11:17:35.203035   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1105 11:17:35.203048   24660 main.go:134] libmachine: Detecting the provisioner...
	I1105 11:17:35.203054   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.203200   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.203298   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.203398   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.203491   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.203640   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.203769   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.203774   24660 main.go:134] libmachine: About to run SSH command:
	cat /etc/os-release
	I1105 11:17:35.278276   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g14f2929-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1105 11:17:35.278347   24660 main.go:134] libmachine: found compatible host: buildroot
	I1105 11:17:35.278351   24660 main.go:134] libmachine: Provisioning with buildroot...
	I1105 11:17:35.278357   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetMachineName
	I1105 11:17:35.278527   24660 buildroot.go:166] provisioning hostname "stopped-upgrade-588000"
	I1105 11:17:35.278537   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetMachineName
	I1105 11:17:35.278661   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.278776   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.278858   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.278946   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.279063   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.279210   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.279342   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.279348   24660 main.go:134] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-588000 && echo "stopped-upgrade-588000" | sudo tee /etc/hostname
	I1105 11:17:35.361835   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-588000
	
	I1105 11:17:35.361848   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.362001   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.362098   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.362176   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.362262   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.362405   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.362526   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.362536   24660 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-588000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-588000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-588000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 11:17:35.442381   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1105 11:17:35.442396   24660 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19910-17277/.minikube CaCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19910-17277/.minikube}
	I1105 11:17:35.442412   24660 buildroot.go:174] setting up certificates
	I1105 11:17:35.442421   24660 provision.go:83] configureAuth start
	I1105 11:17:35.442426   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetMachineName
	I1105 11:17:35.442571   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetIP
	I1105 11:17:35.442662   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.442733   24660 provision.go:138] copyHostCerts
	I1105 11:17:35.442811   24660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem, removing ...
	I1105 11:17:35.442819   24660 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem
	I1105 11:17:35.442973   24660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/ca.pem (1082 bytes)
	I1105 11:17:35.443218   24660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem, removing ...
	I1105 11:17:35.443221   24660 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem
	I1105 11:17:35.443318   24660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/cert.pem (1123 bytes)
	I1105 11:17:35.443494   24660 exec_runner.go:144] found /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem, removing ...
	I1105 11:17:35.443497   24660 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem
	I1105 11:17:35.443582   24660 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19910-17277/.minikube/key.pem (1675 bytes)
	I1105 11:17:35.443741   24660 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-588000 san=[192.169.0.23 192.169.0.23 localhost 127.0.0.1 minikube stopped-upgrade-588000]
	I1105 11:17:35.616536   24660 provision.go:172] copyRemoteCerts
	I1105 11:17:35.616602   24660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 11:17:35.616625   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.616821   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.616949   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.617055   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.617154   24660 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/id_rsa Username:docker}
	I1105 11:17:35.659663   24660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 11:17:35.675927   24660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1105 11:17:35.692245   24660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 11:17:35.708208   24660 provision.go:86] duration metric: configureAuth took 265.766891ms
	I1105 11:17:35.708216   24660 buildroot.go:189] setting minikube options for container-runtime
	I1105 11:17:35.708346   24660 config.go:178] Loaded profile config "stopped-upgrade-588000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1105 11:17:35.708356   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:35.708507   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.708601   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.708679   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.708763   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.708841   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.709001   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.709112   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.709117   24660 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1105 11:17:35.783618   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1105 11:17:35.783627   24660 buildroot.go:70] root file system type: tmpfs
	I1105 11:17:35.783779   24660 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1105 11:17:35.783792   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.783930   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.784023   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.784117   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.784228   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.784395   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.784536   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.784580   24660 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1105 11:17:35.867176   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1105 11:17:35.867197   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:35.867346   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:35.867453   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.867541   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:35.867622   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:35.867763   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:35.867889   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:35.867899   24660 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1105 11:17:36.342630   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1105 11:17:36.342641   24660 main.go:134] libmachine: Checking connection to Docker...
	I1105 11:17:36.342652   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetURL
	I1105 11:17:36.342817   24660 main.go:134] libmachine: Docker is up and running!
	I1105 11:17:36.342822   24660 main.go:134] libmachine: Reticulating splines...
	I1105 11:17:36.342830   24660 client.go:171] LocalClient.Create took 10.862156247s
	I1105 11:17:36.342842   24660 start.go:173] duration metric: libmachine.API.Create for "stopped-upgrade-588000" took 10.862196523s
	I1105 11:17:36.342847   24660 start.go:306] post-start starting for "stopped-upgrade-588000" (driver="hyperkit")
	I1105 11:17:36.342851   24660 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 11:17:36.342859   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.343018   24660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 11:17:36.343028   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:36.343127   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:36.343220   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:36.343308   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:36.343390   24660 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/id_rsa Username:docker}
	I1105 11:17:36.386333   24660 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 11:17:36.388940   24660 info.go:137] Remote host: Buildroot 2021.02.12
	I1105 11:17:36.388949   24660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/addons for local assets ...
	I1105 11:17:36.389048   24660 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19910-17277/.minikube/files for local assets ...
	I1105 11:17:36.389238   24660 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem -> 178422.pem in /etc/ssl/certs
	I1105 11:17:36.389464   24660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 11:17:36.399192   24660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/ssl/certs/178422.pem --> /etc/ssl/certs/178422.pem (1708 bytes)
	I1105 11:17:36.427274   24660 start.go:309] post-start completed in 84.417593ms
	I1105 11:17:36.427304   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetConfigRaw
	I1105 11:17:36.427984   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetIP
	I1105 11:17:36.428133   24660 profile.go:148] Saving config to /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/stopped-upgrade-588000/config.json ...
	I1105 11:17:36.428718   24660 start.go:134] duration metric: createHost completed in 11.004609104s
	I1105 11:17:36.428732   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:36.428836   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:36.428918   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:36.429011   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:36.429087   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:36.429202   24660 main.go:134] libmachine: Using SSH client type: native
	I1105 11:17:36.429292   24660 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2d60] 0x13d5dc0 <nil>  [] 0s} 192.169.0.23 22 <nil> <nil>}
	I1105 11:17:36.429297   24660 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I1105 11:17:36.503666   24660 main.go:134] libmachine: SSH cmd err, output: <nil>: 1730834256.635956175
	
	I1105 11:17:36.503673   24660 fix.go:207] guest clock: 1730834256.635956175
	I1105 11:17:36.503679   24660 fix.go:220] Guest: 2024-11-05 11:17:36.635956175 -0800 PST Remote: 2024-11-05 11:17:36.428724 -0800 PST m=+11.447612481 (delta=207.232175ms)
	I1105 11:17:36.503695   24660 fix.go:191] guest clock delta is within tolerance: 207.232175ms
	I1105 11:17:36.503698   24660 start.go:81] releasing machines lock for "stopped-upgrade-588000", held for 11.079675824s
	I1105 11:17:36.503717   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.503882   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetIP
	I1105 11:17:36.503992   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.504103   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.504194   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.504523   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.504613   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .DriverName
	I1105 11:17:36.504686   24660 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1105 11:17:36.504711   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:36.504760   24660 ssh_runner.go:195] Run: systemctl --version
	I1105 11:17:36.504774   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHHostname
	I1105 11:17:36.504827   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:36.504878   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHPort
	I1105 11:17:36.504911   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:36.504998   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHKeyPath
	I1105 11:17:36.505016   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:36.505077   24660 main.go:134] libmachine: (stopped-upgrade-588000) Calling .GetSSHUsername
	I1105 11:17:36.505096   24660 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/id_rsa Username:docker}
	I1105 11:17:36.505156   24660 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/stopped-upgrade-588000/id_rsa Username:docker}
	I1105 11:17:36.549997   24660 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1105 11:17:36.550084   24660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1105 11:17:36.632671   24660 docker.go:602] Got preloaded images: 
	I1105 11:17:36.632679   24660 docker.go:608] k8s.gcr.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1105 11:17:36.632771   24660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1105 11:17:36.639842   24660 ssh_runner.go:195] Run: which lz4
	I1105 11:17:36.642306   24660 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1105 11:17:36.644988   24660 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1105 11:17:36.645003   24660 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (425543115 bytes)
	I1105 11:17:37.843596   24660 docker.go:567] Took 1.201284 seconds to copy over tarball
	I1105 11:17:37.843663   24660 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1105 11:17:39.277430   24145 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 11:17:39.277444   24145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 11:17:39.277460   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHHostname
	I1105 11:17:39.277628   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHPort
	I1105 11:17:39.277725   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHKeyPath
	I1105 11:17:39.277822   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .GetSSHUsername
	I1105 11:17:39.277921   24145 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/kubernetes-upgrade-498000/id_rsa Username:docker}
	I1105 11:17:39.359899   24145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 11:17:39.373795   24145 api_server.go:52] waiting for apiserver process to appear ...
	I1105 11:17:39.373885   24145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 11:17:39.390016   24145 api_server.go:72] duration metric: took 219.39615ms to wait for apiserver process to appear ...
	I1105 11:17:39.390032   24145 api_server.go:88] waiting for apiserver healthz status ...
	I1105 11:17:39.390045   24145 api_server.go:253] Checking apiserver healthz at https://192.169.0.22:8443/healthz ...
	I1105 11:17:39.393791   24145 api_server.go:279] https://192.169.0.22:8443/healthz returned 200:
	ok
	I1105 11:17:39.394337   24145 api_server.go:141] control plane version: v1.31.2
	I1105 11:17:39.394347   24145 api_server.go:131] duration metric: took 4.309742ms to wait for apiserver health ...
	I1105 11:17:39.394361   24145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 11:17:39.398144   24145 system_pods.go:59] 8 kube-system pods found
	I1105 11:17:39.398166   24145 system_pods.go:61] "coredns-7c65d6cfc9-d6pvt" [e4bb9bc4-b09a-4f29-98c8-aa1860e15d14] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 11:17:39.398175   24145 system_pods.go:61] "coredns-7c65d6cfc9-h89l5" [ff9c9b43-a44a-4463-9ee8-ffd227c192b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 11:17:39.398182   24145 system_pods.go:61] "etcd-kubernetes-upgrade-498000" [2453e004-26e8-4148-b52e-21b1aeb47a3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1105 11:17:39.398188   24145 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-498000" [d5d995df-7ac3-49ce-86a6-d1e6bfbe55bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 11:17:39.398195   24145 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-498000" [7ed5683f-bf7e-42ad-b20d-02de8e7a1fca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 11:17:39.398202   24145 system_pods.go:61] "kube-proxy-fr96x" [e7ba4af2-b6ed-40e8-9290-cc14aaa831a8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1105 11:17:39.398220   24145 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-498000" [0f97b662-2323-4cc6-a8ab-18e5436e303e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1105 11:17:39.398225   24145 system_pods.go:61] "storage-provisioner" [1f3bfba1-5ccd-4916-98f4-ca68037ae457] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1105 11:17:39.398234   24145 system_pods.go:74] duration metric: took 3.867715ms to wait for pod list to return data ...
	I1105 11:17:39.398241   24145 kubeadm.go:582] duration metric: took 227.623295ms to wait for: map[apiserver:true system_pods:true]
	I1105 11:17:39.398249   24145 node_conditions.go:102] verifying NodePressure condition ...
	I1105 11:17:39.400437   24145 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1105 11:17:39.400449   24145 node_conditions.go:123] node cpu capacity is 2
	I1105 11:17:39.400455   24145 node_conditions.go:105] duration metric: took 2.201925ms to run NodePressure ...
	I1105 11:17:39.400465   24145 start.go:241] waiting for startup goroutines ...
	I1105 11:17:39.404947   24145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 11:17:39.460268   24145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 11:17:40.478962   24145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.018644786s)
	I1105 11:17:40.478992   24145 main.go:141] libmachine: Making call to close driver server
	I1105 11:17:40.478998   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Close
	I1105 11:17:40.479204   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Closing plugin on server side
	I1105 11:17:40.479243   24145 main.go:141] libmachine: Successfully made call to close driver server
	I1105 11:17:40.479251   24145 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 11:17:40.479260   24145 main.go:141] libmachine: Making call to close driver server
	I1105 11:17:40.479267   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Close
	I1105 11:17:40.479392   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Closing plugin on server side
	I1105 11:17:40.479395   24145 main.go:141] libmachine: Successfully made call to close driver server
	I1105 11:17:40.479421   24145 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 11:17:40.480518   24145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075515374s)
	I1105 11:17:40.480537   24145 main.go:141] libmachine: Making call to close driver server
	I1105 11:17:40.480543   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Close
	I1105 11:17:40.480722   24145 main.go:141] libmachine: Successfully made call to close driver server
	I1105 11:17:40.480730   24145 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 11:17:40.480734   24145 main.go:141] libmachine: Making call to close driver server
	I1105 11:17:40.480738   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Close
	I1105 11:17:40.480743   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Closing plugin on server side
	I1105 11:17:40.480863   24145 main.go:141] libmachine: Successfully made call to close driver server
	I1105 11:17:40.480878   24145 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 11:17:40.489258   24145 main.go:141] libmachine: Making call to close driver server
	I1105 11:17:40.489271   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) Calling .Close
	I1105 11:17:40.489455   24145 main.go:141] libmachine: Successfully made call to close driver server
	I1105 11:17:40.489463   24145 main.go:141] libmachine: Making call to close connection to plugin binary
	I1105 11:17:40.489483   24145 main.go:141] libmachine: (kubernetes-upgrade-498000) DBG | Closing plugin on server side
	I1105 11:17:40.543049   24145 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1105 11:17:40.564967   24145 addons.go:510] duration metric: took 1.394294851s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1105 11:17:40.565009   24145 start.go:246] waiting for cluster config update ...
	I1105 11:17:40.565030   24145 start.go:255] writing updated cluster config ...
	I1105 11:17:40.566793   24145 ssh_runner.go:195] Run: rm -f paused
	I1105 11:17:40.631395   24145 start.go:600] kubectl: 1.30.2, cluster: 1.31.2 (minor skew: 1)
	I1105 11:17:40.653116   24145 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-498000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Nov 05 19:17:39 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:39.932172355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:39 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:39.946141977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 19:17:39 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:39.946325244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 19:17:39 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:39.946355708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:39 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:39.946496256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 cri-dockerd[5856]: time="2024-11-05T19:17:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1c3c33f553fae57f412b7317a4e91a25d04af686b2eb1662078dac1dc9ec3408/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 19:17:40 kubernetes-upgrade-498000 cri-dockerd[5856]: time="2024-11-05T19:17:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ce66db9c09c807881bf7f958812051ce5de9afd26ccc3e3c656601ac0863893/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.199832835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.200008857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.200046761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.200297274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.249426006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.249935664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.249945871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.250960792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 cri-dockerd[5856]: time="2024-11-05T19:17:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8072a62cbee5e796fd7165c68e1234faad8b1405a986d19931854bd32b2a1e10/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 19:17:40 kubernetes-upgrade-498000 cri-dockerd[5856]: time="2024-11-05T19:17:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c7457d5cb5b3e0aa1b51b12bad2657e18686b1399ac36c1344891fed8a55b4d0/resolv.conf as [nameserver 192.169.0.1]"
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.560751614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.560841264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.560910915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.561874695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.649875192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.650143628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.650204555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 05 19:17:40 kubernetes-upgrade-498000 dockerd[5591]: time="2024-11-05T19:17:40.650364010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	217c143e8f532       c69fa2e9cbf5f       1 second ago        Running             coredns                   1                   c7457d5cb5b3e       coredns-7c65d6cfc9-h89l5
	01e2693a737a1       c69fa2e9cbf5f       1 second ago        Running             coredns                   1                   8072a62cbee5e       coredns-7c65d6cfc9-d6pvt
	779bf240736c8       6e38f40d628db       1 second ago        Running             storage-provisioner       3                   0ce66db9c09c8       storage-provisioner
	740ffd1ce3147       505d571f5fd56       1 second ago        Running             kube-proxy                2                   1c3c33f553fae       kube-proxy-fr96x
	0c37f0aba723e       2e96e5913fc06       7 seconds ago       Running             etcd                      2                   fc3b8fffc7e0d       etcd-kubernetes-upgrade-498000
	04f07fa1f0001       0486b6c53a1b5       7 seconds ago       Running             kube-controller-manager   2                   05b03ef1c86f9       kube-controller-manager-kubernetes-upgrade-498000
	790445453e930       9499c9960544e       7 seconds ago       Running             kube-apiserver            2                   c73c82a5d3166       kube-apiserver-kubernetes-upgrade-498000
	ed0cace8da606       847c7bc1a5418       7 seconds ago       Running             kube-scheduler            2                   8c838db587727       kube-scheduler-kubernetes-upgrade-498000
	0db273535a5de       505d571f5fd56       9 seconds ago       Created             kube-proxy                1                   d2fd217b94091       kube-proxy-fr96x
	b3231458b403a       0486b6c53a1b5       9 seconds ago       Created             kube-controller-manager   1                   61c5bc86e03fd       kube-controller-manager-kubernetes-upgrade-498000
	b4a1d1bb41c65       6e38f40d628db       9 seconds ago       Created             storage-provisioner       2                   4cd388dc2268a       storage-provisioner
	6c45e2c8c11db       847c7bc1a5418       9 seconds ago       Created             kube-scheduler            1                   7b613e46eb5a9       kube-scheduler-kubernetes-upgrade-498000
	83173e7791650       2e96e5913fc06       9 seconds ago       Created             etcd                      1                   a48c73485fe49       etcd-kubernetes-upgrade-498000
	7a9425f760fae       9499c9960544e       10 seconds ago      Created             kube-apiserver            1                   ad8feee15ae1e       kube-apiserver-kubernetes-upgrade-498000
	8b6c0717f24de       c69fa2e9cbf5f       10 minutes ago      Exited              coredns                   0                   683fa5b18e68b       coredns-7c65d6cfc9-d6pvt
	a947d4a5e9461       c69fa2e9cbf5f       10 minutes ago      Exited              coredns                   0                   6aa31a86f4e1f       coredns-7c65d6cfc9-h89l5
	
	
	==> coredns [01e2693a737a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [217c143e8f53] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8b6c0717f24d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1691102535]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.754) (total time: 30001ms):
	Trace[1691102535]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:07:32.755)
	Trace[1691102535]: [30.001811613s] [30.001811613s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[352324107]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.754) (total time: 30002ms):
	Trace[352324107]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:07:32.755)
	Trace[352324107]: [30.002619774s] [30.002619774s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2121884389]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.754) (total time: 30003ms):
	Trace[2121884389]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:07:32.756)
	Trace[2121884389]: [30.003213419s] [30.003213419s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a947d4a5e946] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[127044346]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.700) (total time: 30000ms):
	Trace[127044346]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:07:32.701)
	Trace[127044346]: [30.000721669s] [30.000721669s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[82129819]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.700) (total time: 30000ms):
	Trace[82129819]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:07:32.701)
	Trace[82129819]: [30.000493927s] [30.000493927s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[795825684]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 19:07:02.700) (total time: 30000ms):
	Trace[795825684]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:07:32.701)
	Trace[795825684]: [30.000880332s] [30.000880332s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-498000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-498000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=kubernetes-upgrade-498000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T11_06_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 19:06:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-498000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 19:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 19:17:38 +0000   Tue, 05 Nov 2024 19:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 19:17:38 +0000   Tue, 05 Nov 2024 19:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 19:17:38 +0000   Tue, 05 Nov 2024 19:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 19:17:38 +0000   Tue, 05 Nov 2024 19:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.22
	  Hostname:    kubernetes-upgrade-498000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 de1be3ecd5ab476d926e3f4cc2791c91
	  System UUID:                b9614238-0000-0000-822d-4bd7795e3c6b
	  Boot ID:                    6ceb544c-7b4d-4f36-aea5-52ca4a905cb7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-d6pvt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7c65d6cfc9-h89l5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-kubernetes-upgrade-498000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kube-apiserver-kubernetes-upgrade-498000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-kubernetes-upgrade-498000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fr96x                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-kubernetes-upgrade-498000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2s               kube-proxy       
	  Normal  Starting                 10m              kube-proxy       
	  Normal  NodeAllocatableEnforced  10m              kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m              kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m              kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m              kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m              kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m              node-controller  Node kubernetes-upgrade-498000 event: Registered Node kubernetes-upgrade-498000 in Controller
	  Normal  CIDRAssignmentFailed     10m              cidrAllocator    Node kubernetes-upgrade-498000 status is now: CIDRAssignmentFailed
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)  kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)  kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)  kubelet          Node kubernetes-upgrade-498000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           1s               node-controller  Node kubernetes-upgrade-498000 event: Registered Node kubernetes-upgrade-498000 in Controller
	
	
	==> dmesg <==
	[  +0.130257] systemd-fstab-generator[1244]: Ignoring "noauto" option for root device
	[  +3.357970] systemd-fstab-generator[1404]: Ignoring "noauto" option for root device
	[  +0.054494] kauditd_printk_skb: 239 callbacks suppressed
	[  +2.489826] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +4.490443] systemd-fstab-generator[1834]: Ignoring "noauto" option for root device
	[  +0.052613] kauditd_printk_skb: 70 callbacks suppressed
	[  +4.978070] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.069744] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.254579] systemd-fstab-generator[2322]: Ignoring "noauto" option for root device
	[Nov 5 19:07] kauditd_printk_skb: 34 callbacks suppressed
	[ +30.173498] kauditd_printk_skb: 76 callbacks suppressed
	[Nov 5 19:17] systemd-fstab-generator[5063]: Ignoring "noauto" option for root device
	[  +0.279977] systemd-fstab-generator[5097]: Ignoring "noauto" option for root device
	[  +0.140892] systemd-fstab-generator[5109]: Ignoring "noauto" option for root device
	[  +0.161573] systemd-fstab-generator[5123]: Ignoring "noauto" option for root device
	[  +5.201163] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.476266] systemd-fstab-generator[5808]: Ignoring "noauto" option for root device
	[  +0.118316] systemd-fstab-generator[5820]: Ignoring "noauto" option for root device
	[  +0.109361] systemd-fstab-generator[5832]: Ignoring "noauto" option for root device
	[  +0.127833] systemd-fstab-generator[5848]: Ignoring "noauto" option for root device
	[  +0.416121] systemd-fstab-generator[6015]: Ignoring "noauto" option for root device
	[  +2.434168] systemd-fstab-generator[6781]: Ignoring "noauto" option for root device
	[  +0.057136] kauditd_printk_skb: 183 callbacks suppressed
	[  +5.641159] systemd-fstab-generator[7311]: Ignoring "noauto" option for root device
	[  +0.080022] kauditd_printk_skb: 52 callbacks suppressed
	
	
	==> etcd [0c37f0aba723] <==
	{"level":"info","ts":"2024-11-05T19:17:35.325798Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:17:35.325872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:17:35.325948Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-11-05T19:17:35.326246Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.22:2380"}
	{"level":"info","ts":"2024-11-05T19:17:35.326350Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.22:2380"}
	{"level":"info","ts":"2024-11-05T19:17:35.327179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 switched to configuration voters=(2181649584734546038)"}
	{"level":"info","ts":"2024-11-05T19:17:35.327737Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fcba3ed0b428d5be","local-member-id":"1e46c6bd0a7b2876","added-peer-id":"1e46c6bd0a7b2876","added-peer-peer-urls":["https://192.169.0.22:2380"]}
	{"level":"info","ts":"2024-11-05T19:17:35.327982Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fcba3ed0b428d5be","local-member-id":"1e46c6bd0a7b2876","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:17:35.328133Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-11-05T19:17:36.812001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 is starting a new election at term 2"}
	{"level":"info","ts":"2024-11-05T19:17:36.812064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-11-05T19:17:36.812088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 received MsgPreVoteResp from 1e46c6bd0a7b2876 at term 2"}
	{"level":"info","ts":"2024-11-05T19:17:36.812122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 became candidate at term 3"}
	{"level":"info","ts":"2024-11-05T19:17:36.812167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 received MsgVoteResp from 1e46c6bd0a7b2876 at term 3"}
	{"level":"info","ts":"2024-11-05T19:17:36.812181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1e46c6bd0a7b2876 became leader at term 3"}
	{"level":"info","ts":"2024-11-05T19:17:36.812190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1e46c6bd0a7b2876 elected leader 1e46c6bd0a7b2876 at term 3"}
	{"level":"info","ts":"2024-11-05T19:17:36.813471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:17:36.813407Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1e46c6bd0a7b2876","local-member-attributes":"{Name:kubernetes-upgrade-498000 ClientURLs:[https://192.169.0.22:2379]}","request-path":"/0/members/1e46c6bd0a7b2876/attributes","cluster-id":"fcba3ed0b428d5be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-11-05T19:17:36.814556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-11-05T19:17:36.814592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:17:36.816139Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.22:2379"}
	{"level":"info","ts":"2024-11-05T19:17:36.816226Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-11-05T19:17:36.817230Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-11-05T19:17:36.815016Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-11-05T19:17:36.818724Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [83173e779165] <==
	
	
	==> kernel <==
	 19:17:43 up 11 min,  0 users,  load average: 0.59, 0.31, 0.16
	Linux kubernetes-upgrade-498000 5.10.207 #1 SMP Wed Oct 30 13:38:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [790445453e93] <==
	I1105 19:17:37.951344       1 policy_source.go:224] refreshing policies
	I1105 19:17:37.954271       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 19:17:37.954467       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 19:17:37.957764       1 aggregator.go:171] initial CRD sync complete...
	I1105 19:17:37.957801       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 19:17:37.957839       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 19:17:37.957849       1 cache.go:39] Caches are synced for autoregister controller
	I1105 19:17:37.965649       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 19:17:38.034645       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 19:17:38.035547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1105 19:17:38.035609       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 19:17:38.035614       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 19:17:38.035798       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 19:17:38.035894       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 19:17:38.039373       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 19:17:38.040914       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 19:17:38.837741       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1105 19:17:39.053508       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.22]
	I1105 19:17:39.055499       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 19:17:39.134664       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1105 19:17:39.143400       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1105 19:17:39.164757       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1105 19:17:39.316457       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1105 19:17:39.321209       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1105 19:17:41.576789       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7a9425f760fa] <==
	
	
	==> kube-controller-manager [04f07fa1f000] <==
	I1105 19:17:41.268291       1 shared_informer.go:320] Caches are synced for stateful set
	I1105 19:17:41.268299       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1105 19:17:41.268303       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1105 19:17:41.268306       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1105 19:17:41.275926       1 shared_informer.go:320] Caches are synced for HPA
	I1105 19:17:41.278200       1 shared_informer.go:320] Caches are synced for cronjob
	I1105 19:17:41.285234       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1105 19:17:41.321675       1 shared_informer.go:320] Caches are synced for attach detach
	I1105 19:17:41.372883       1 shared_informer.go:320] Caches are synced for resource quota
	I1105 19:17:41.379200       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1105 19:17:41.380817       1 shared_informer.go:320] Caches are synced for taint
	I1105 19:17:41.380931       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1105 19:17:41.381117       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-498000"
	I1105 19:17:41.381201       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1105 19:17:41.389411       1 shared_informer.go:320] Caches are synced for crt configmap
	I1105 19:17:41.426945       1 shared_informer.go:320] Caches are synced for resource quota
	I1105 19:17:41.455026       1 shared_informer.go:320] Caches are synced for persistent volume
	I1105 19:17:41.467514       1 shared_informer.go:320] Caches are synced for daemon sets
	I1105 19:17:41.635587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="367.609246ms"
	I1105 19:17:41.636817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.139978ms"
	I1105 19:17:41.904559       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 19:17:41.967795       1 shared_informer.go:320] Caches are synced for garbage collector
	I1105 19:17:41.967816       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1105 19:17:42.751673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.254496ms"
	I1105 19:17:42.752097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.106µs"
	
	
	==> kube-controller-manager [b3231458b403] <==
	
	
	==> kube-proxy [0db273535a5d] <==
	
	
	==> kube-proxy [740ffd1ce314] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1105 19:17:40.530463       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1105 19:17:40.539903       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.22"]
	E1105 19:17:40.539962       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 19:17:40.592676       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1105 19:17:40.592723       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1105 19:17:40.592741       1 server_linux.go:169] "Using iptables Proxier"
	I1105 19:17:40.596153       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 19:17:40.596353       1 server.go:483] "Version info" version="v1.31.2"
	I1105 19:17:40.596382       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:17:40.598235       1 config.go:199] "Starting service config controller"
	I1105 19:17:40.598296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 19:17:40.598315       1 config.go:105] "Starting endpoint slice config controller"
	I1105 19:17:40.598318       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 19:17:40.599053       1 config.go:328] "Starting node config controller"
	I1105 19:17:40.599078       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 19:17:40.698670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 19:17:40.698728       1 shared_informer.go:320] Caches are synced for service config
	I1105 19:17:40.699859       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6c45e2c8c11d] <==
	
	
	==> kube-scheduler [ed0cace8da60] <==
	I1105 19:17:35.816243       1 serving.go:386] Generated self-signed cert in-memory
	W1105 19:17:37.922303       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1105 19:17:37.922482       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1105 19:17:37.922668       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1105 19:17:37.922807       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1105 19:17:37.994958       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1105 19:17:37.995329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 19:17:37.998986       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1105 19:17:37.999217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1105 19:17:38.006344       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1105 19:17:37.999229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1105 19:17:38.107299       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 19:17:38 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:38.032715    6788 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 05 19:17:38 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:38.079964    6788 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1f3bfba1-5ccd-4916-98f4-ca68037ae457-tmp\") pod \"storage-provisioner\" (UID: \"1f3bfba1-5ccd-4916-98f4-ca68037ae457\") " pod="kube-system/storage-provisioner"
	Nov 05 19:17:38 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:38.080464    6788 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7ba4af2-b6ed-40e8-9290-cc14aaa831a8-xtables-lock\") pod \"kube-proxy-fr96x\" (UID: \"e7ba4af2-b6ed-40e8-9290-cc14aaa831a8\") " pod="kube-system/kube-proxy-fr96x"
	Nov 05 19:17:38 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:38.080505    6788 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7ba4af2-b6ed-40e8-9290-cc14aaa831a8-lib-modules\") pod \"kube-proxy-fr96x\" (UID: \"e7ba4af2-b6ed-40e8-9290-cc14aaa831a8\") " pod="kube-system/kube-proxy-fr96x"
	Nov 05 19:17:38 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:38.124020    6788 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-498000\" already exists" pod="kube-system/etcd-kubernetes-upgrade-498000"
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.080870    6788 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.080967    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff9c9b43-a44a-4463-9ee8-ffd227c192b9-config-volume podName:ff9c9b43-a44a-4463-9ee8-ffd227c192b9 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.580952286 +0000 UTC m=+5.761755727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ff9c9b43-a44a-4463-9ee8-ffd227c192b9-config-volume") pod "coredns-7c65d6cfc9-h89l5" (UID: "ff9c9b43-a44a-4463-9ee8-ffd227c192b9") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.080870    6788 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.080999    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e4bb9bc4-b09a-4f29-98c8-aa1860e15d14-config-volume podName:e4bb9bc4-b09a-4f29-98c8-aa1860e15d14 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.580993884 +0000 UTC m=+5.761797326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e4bb9bc4-b09a-4f29-98c8-aa1860e15d14-config-volume") pod "coredns-7c65d6cfc9-d6pvt" (UID: "e4bb9bc4-b09a-4f29-98c8-aa1860e15d14") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.089668    6788 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.089718    6788 projected.go:194] Error preparing data for projected volume kube-api-access-74xw7 for pod kube-system/coredns-7c65d6cfc9-d6pvt: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.089793    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e4bb9bc4-b09a-4f29-98c8-aa1860e15d14-kube-api-access-74xw7 podName:e4bb9bc4-b09a-4f29-98c8-aa1860e15d14 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.589779843 +0000 UTC m=+5.770583284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-74xw7" (UniqueName: "kubernetes.io/projected/e4bb9bc4-b09a-4f29-98c8-aa1860e15d14-kube-api-access-74xw7") pod "coredns-7c65d6cfc9-d6pvt" (UID: "e4bb9bc4-b09a-4f29-98c8-aa1860e15d14") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.090677    6788 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.090710    6788 projected.go:194] Error preparing data for projected volume kube-api-access-xhxjw for pod kube-system/kube-proxy-fr96x: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.090741    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7ba4af2-b6ed-40e8-9290-cc14aaa831a8-kube-api-access-xhxjw podName:e7ba4af2-b6ed-40e8-9290-cc14aaa831a8 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.590732394 +0000 UTC m=+5.771535833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xhxjw" (UniqueName: "kubernetes.io/projected/e7ba4af2-b6ed-40e8-9290-cc14aaa831a8-kube-api-access-xhxjw") pod "kube-proxy-fr96x" (UID: "e7ba4af2-b6ed-40e8-9290-cc14aaa831a8") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093013    6788 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093035    6788 projected.go:194] Error preparing data for projected volume kube-api-access-lt5f9 for pod kube-system/coredns-7c65d6cfc9-h89l5: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093063    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff9c9b43-a44a-4463-9ee8-ffd227c192b9-kube-api-access-lt5f9 podName:ff9c9b43-a44a-4463-9ee8-ffd227c192b9 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.593054633 +0000 UTC m=+5.773858072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lt5f9" (UniqueName: "kubernetes.io/projected/ff9c9b43-a44a-4463-9ee8-ffd227c192b9-kube-api-access-lt5f9") pod "coredns-7c65d6cfc9-h89l5" (UID: "ff9c9b43-a44a-4463-9ee8-ffd227c192b9") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093106    6788 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093117    6788 projected.go:194] Error preparing data for projected volume kube-api-access-4glxh for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:39 kubernetes-upgrade-498000 kubelet[6788]: E1105 19:17:39.093137    6788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f3bfba1-5ccd-4916-98f4-ca68037ae457-kube-api-access-4glxh podName:1f3bfba1-5ccd-4916-98f4-ca68037ae457 nodeName:}" failed. No retries permitted until 2024-11-05 19:17:39.593131161 +0000 UTC m=+5.773934600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4glxh" (UniqueName: "kubernetes.io/projected/1f3bfba1-5ccd-4916-98f4-ca68037ae457-kube-api-access-4glxh") pod "storage-provisioner" (UID: "1f3bfba1-5ccd-4916-98f4-ca68037ae457") : failed to sync configmap cache: timed out waiting for the condition
	Nov 05 19:17:40 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:40.343719    6788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7457d5cb5b3e0aa1b51b12bad2657e18686b1399ac36c1344891fed8a55b4d0"
	Nov 05 19:17:40 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:40.410982    6788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8072a62cbee5e796fd7165c68e1234faad8b1405a986d19931854bd32b2a1e10"
	Nov 05 19:17:42 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:42.467383    6788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 05 19:17:42 kubernetes-upgrade-498000 kubelet[6788]: I1105 19:17:42.467579    6788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [779bf240736c] <==
	I1105 19:17:40.317077       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 19:17:40.334501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 19:17:40.334554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b4a1d1bb41c6] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-498000 -n kubernetes-upgrade-498000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-498000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-498000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-498000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-498000: (5.273508447s)
--- FAIL: TestKubernetesUpgrade (1341.75s)

                                                
                                    
TestPause/serial/Start (141.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-721000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-721000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.327392258s)

                                                
                                                
-- stdout --
	* [pause-721000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-721000" primary control-plane node in "pause-721000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-721000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:1b:e3:7c:db:0c
	* Failed to start hyperkit VM. Running "minikube delete -p pause-721000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:c5:b7:eb:f6:c5
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3e:c5:b7:eb:f6:c5
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-721000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-721000 -n pause-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-721000 -n pause-721000: exit status 7 (113.249594ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 11:20:11.390317   24831 status.go:393] failed to get driver ip: getting IP: IP address is not set
	E1105 11:20:11.390341   24831 status.go:119] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-721000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.44s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (109.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-081000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-081000 --driver=hyperkit : exit status 90 (1m49.65648518s)

                                                
                                                
-- stdout --
	* [NoKubernetes-081000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "NoKubernetes-081000" primary control-plane node in "NoKubernetes-081000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Nov 05 19:20:24 NoKubernetes-081000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:25.007130712Z" level=info msg="Starting up"
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:25.007607380Z" level=info msg="containerd not running, starting managed containerd"
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:25.008285228Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.023064663Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038240630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038335124Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038402327Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038437364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038514068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038551495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038698042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038743172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038775141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038804456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.038887947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.039091123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.040682807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.040735622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.040878690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.040921732Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.041014440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.041083716Z" level=info msg="metadata content store policy set" policy=shared
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043611754Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043695697Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043741928Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043776050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043812454Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.043901538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044120225Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044256626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044298199Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044334441Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044371014Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044410387Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044443991Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044474469Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044510114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044549939Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044585180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044613835Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044649148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044679715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044726807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044763761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044794994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044834131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044867438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044902525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044935956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044966678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.044995354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045023935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045055332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045088804Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045130472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045165594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045227875Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045312591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045358196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045388897Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045453219Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045486958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045519536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045550620Z" level=info msg="NRI interface is disabled by configuration."
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045720461Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045807182Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045868800Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Nov 05 19:20:25 NoKubernetes-081000 dockerd[516]: time="2024-11-05T19:20:25.045903666Z" level=info msg="containerd successfully booted in 0.023447s"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.029953023Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.034453237Z" level=info msg="Loading containers: start."
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.119255993Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.201639686Z" level=info msg="Loading containers: done."
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.212263347Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.212297044Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.212319842Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.212385645Z" level=info msg="Daemon has completed initialization"
	Nov 05 19:20:26 NoKubernetes-081000 systemd[1]: Started Docker Application Container Engine.
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.238640616Z" level=info msg="API listen on [::]:2376"
	Nov 05 19:20:26 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:26.238748722Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 05 19:20:27 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:27.262685065Z" level=info msg="Processing signal 'terminated'"
	Nov 05 19:20:27 NoKubernetes-081000 systemd[1]: Stopping Docker Application Container Engine...
	Nov 05 19:20:27 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:27.263826164Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 05 19:20:27 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:27.264118457Z" level=info msg="Daemon shutdown complete"
	Nov 05 19:20:27 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:27.264157129Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 05 19:20:27 NoKubernetes-081000 dockerd[509]: time="2024-11-05T19:20:27.264168756Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Nov 05 19:20:28 NoKubernetes-081000 systemd[1]: docker.service: Deactivated successfully.
	Nov 05 19:20:28 NoKubernetes-081000 systemd[1]: Stopped Docker Application Container Engine.
	Nov 05 19:20:28 NoKubernetes-081000 systemd[1]: Starting Docker Application Container Engine...
	Nov 05 19:20:28 NoKubernetes-081000 dockerd[915]: time="2024-11-05T19:20:28.293278173Z" level=info msg="Starting up"
	Nov 05 19:21:28 NoKubernetes-081000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Nov 05 19:21:28 NoKubernetes-081000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Nov 05 19:21:28 NoKubernetes-081000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Nov 05 19:21:28 NoKubernetes-081000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-081000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000: exit status 6 (180.648668ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1105 11:21:28.451787   24874 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-081000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-081000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (109.84s)

TestNoKubernetes/serial/StartWithStopK8s (64.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --driver=hyperkit : (1m1.434225946s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-081000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-081000 status -o json: exit status 6 (176.728277ms)

-- stdout --
	{"Name":"NoKubernetes-081000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	E1105 11:22:30.066193   25108 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-081000" does not appear in /Users/jenkins/minikube-integration/19910-17277/kubeconfig

** /stderr **
no_kubernetes_test.go:203: failed to run minikube status with json output. args "out/minikube-darwin-amd64 -p NoKubernetes-081000 status -o json" : exit status 6
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-081000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-081000: (2.453073879s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000: exit status 85 (132.805909ms)

-- stdout --
	* Profile "NoKubernetes-081000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-081000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-081000" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-081000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-081000\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-081000 -n NoKubernetes-081000: exit status 85 (192.740366ms)

-- stdout --
	* Profile "NoKubernetes-081000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-081000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-081000" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-081000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-081000\"")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (64.39s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7201.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-841000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.31.2
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (59m8s)
		TestNetworkPlugins/group (10m43s)
		TestStartStop (20m46s)
		TestStartStop/group/default-k8s-diff-port (1m57s)
		TestStartStop/group/default-k8s-diff-port/serial (1m57s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (17s)
		TestStartStop/group/embed-certs (4m20s)
		TestStartStop/group/embed-certs/serial (4m20s)
		TestStartStop/group/embed-certs/serial/SecondStart (3m12s)

goroutine 4154 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000750d00, 0xc00079fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000892138, {0xb2da1a0, 0x2a, 0x2a}, {0x64914d6?, 0xffffffffffffffff?, 0xb300fe0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc0005d7680)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0005d7680)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00063d180)
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 3011 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3010
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb
                                                
goroutine 111 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a9acc0, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 109
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569
                                                
goroutine 3247 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3246
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb
                                                
goroutine 3717 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001c5b750, 0xc001c5b798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x0?, 0xc001c5b750, 0xc001c5b798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x6a66a56?, 0xc001567b00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c5b7d0?, 0x660e164?, 0xc001ee4b40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3700
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a
                                                
goroutine 3880 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001435f50, 0xc001435f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x63?, 0xc001435f50, 0xc001435f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x776f4c74706d6565?, 0x69726f6972507265?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x7469646e6f63222c?, 0x7b5b3a22736e6f69?, 0x223a226570797422?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a
                                                
goroutine 994 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001567800, 0xc00163aaf0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 993
	/usr/local/go/src/os/exec/exec.go:759 +0x953
                                                
goroutine 110 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 109
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238
                                                
goroutine 884 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d85550, 0x29)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bf6d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d85580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009893b0, {0x99a0fa0, 0xc0009a7b90}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009893b0, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf
                                                
goroutine 2719 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2718
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 4129 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001b22610, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001aa1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b22700)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0023c8190, {0x99a0fa0, 0xc001b1a1b0}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023c8190, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4113
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 127 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000a9ac90, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0008c4d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a9acc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00088c030, {0x99a0fa0, 0xc00082a060}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00088c030, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 128 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc0008c7f50, 0xc0008c7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x0?, 0xc0008c7f50, 0xc0008c7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x95af4c0?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00050dfd0?, 0x687cd45?, 0xc00090c330?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 111
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 129 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 128
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3022 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00213a780, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3017
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3812 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3826
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2718 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc0014d1f50, 0xc0014d1f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x10?, 0xc0014d1f50, 0xc0014d1f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc00098c680?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014d1fd0?, 0x660e164?, 0xc001536480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 3576 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3575
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3315 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3314
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3246 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc0014d2750, 0xc0014d2798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x90?, 0xc0014d2750, 0xc0014d2798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc001528820?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x660e105?, 0xc000a68a80?, 0xc001cdcd90?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3232
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 2874 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2873
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3004 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00213a750, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001444d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00213a780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001f3ac60, {0x99a0fa0, 0xc0014f5470}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001f3ac60, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3022
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 886 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 885
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2700 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0000fe1a0, {0x8468884?, 0x0?}, 0xc0005f9980)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fe1a0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0000fe1a0, 0xc001a82380)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2697
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2873 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc0000b9f50, 0xc0000b9f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x48?, 0xc0000b9f50, 0xc0000b9f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc00147dba0?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000b9fd0?, 0x660e164?, 0xc001e7f4a0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 1321 [select, 101 minutes]:
net/http.(*persistConn).writeLoop(0xc0021aea20)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 1337
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 3232 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d84500, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3241
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 2128 [chan receive, 59 minutes]:
testing.(*T).Run(0xc00055f040, {0x84674aa?, 0x8692139f13c?}, 0xc001603350)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00055f040)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc00055f040, 0x99921d8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4035 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x52c97da8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0008f6660?, 0xc00147d499?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008f6660, {0xc00147d499, 0x367, 0x367})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001f766d8, {0xc00147d499?, 0x660c207?, 0x230?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0015b5d10, {0x999f3e8, 0xc001e4c410})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x999f580, 0xc0015b5d10}, {0x999f3e8, 0xc001e4c410}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xb1b6ee0?, {0x999f580, 0xc0015b5d10})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf?, {0x999f580?, 0xc0015b5d10?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x999f580, 0xc0015b5d10}, {0x999f4e0, 0xc001f766d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001563080?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4034
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 665 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0x52c98c18, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0005c9e80?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0005c9e80)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0005c9e80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001d84700)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001d84700)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc0002670e0, {0x99cca60, 0xc001d84700})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc0002670e0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00055e680?, 0xc00055eb60)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 662
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x129

goroutine 3851 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a83b40, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3875
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3813 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00173a380, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3826
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 2699 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0008b2be0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000fe000)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000fe000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fe000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0000fe000, 0xc001a82340)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2697
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 885 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001c5b750, 0xc001447f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0xc0?, 0xc001c5b750, 0xc001c5b798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xd68948f189481010?, 0x448948fffffab7e9?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c5b7d0?, 0x660e164?, 0xc001c521c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 864
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 3831 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001aa3750, 0xc001aa3798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x7?, 0xc001aa3750, 0xc001aa3798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc00098cb60?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001aa37d0?, 0x660e164?, 0xc00082b6b0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 2222 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc0008b2be0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc00099b1e0, 0xc001603350)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2128
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 1202 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e5f500, 0xc001fa8af0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1201
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3575 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc000508750, 0xc000508798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x10?, 0xc000508750, 0xc000508798)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc001528000?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005087d0?, 0x660e164?, 0xc00175c120?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3560
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 3560 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000821640, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3558
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 2703 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00098c000, {0x8468884?, 0x0?}, 0xc001e51700)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00098c000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00098c000, 0xc001a82480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2697
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3850 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3875
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 864 [chan receive, 101 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d85580, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3262 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00213a580, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3299
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3716 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000a9af50, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014a4d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a9b000)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a88980, {0x99a0fa0, 0xc001958ed0}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a88980, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3700
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 863 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 794
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3009 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000820310, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bf0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000821240)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000910610, {0x99a0fa0, 0xc001538180}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000910610, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2997
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 2853 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00213a6c0, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3879 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001a83b10, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00079dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a83b40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001a89cf0, {0x99a0fa0, 0xc001e6cb70}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001a89cf0, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 3830 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00173a350, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00079bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00173a380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018c2220, {0x99a0fa0, 0xc0015262a0}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018c2220, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 3245 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001d844d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00148dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d84500)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b0a740, {0x99a0fa0, 0xc001cda810}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b0a740, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3232
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 3314 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001aa2f50, 0xc001aa2f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x80?, 0xc001aa2f50, 0xc001aa2f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x660e105?, 0xc001afec00?, 0xc001cdca80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3262
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 3699 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3696
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2230 [chan receive, 22 minutes]:
testing.(*T).Run(0xc00055f520, {0x84674aa?, 0x65ce933?}, 0x9992398)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00055f520)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00055f520, 0x9992220)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4131 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4130
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2202 [chan receive, 59 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00213a300, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2168
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 1320 [select, 101 minutes]:
net/http.(*persistConn).readLoop(0xc0021aea20)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 1337
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 3452 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3448
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2723 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2687
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4037 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017f7680, 0xc001cdd8f0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4034
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2724 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a82780, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2687
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 2201 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2168
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1247 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc001fdf980, 0xc0021bc770)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1246
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 2192 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001448f50, 0xc001448f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x0?, 0xc001448f50, 0xc001448f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc000a68000?, 0xc000668210?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c59fd0?, 0x660e164?, 0x1?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2202
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 2191 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00213a2d0, 0x1e)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001445d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00213a300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001eca000, {0x99a0fa0, 0xc001606030}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001eca000, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2202
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 2209 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2192
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3574 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000821610, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bf5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000821640)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0023c8410, {0x99a0fa0, 0xc0014f57a0}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023c8410, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3560
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 2697 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00055f6c0, 0x9992398)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 2230
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3005 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc0014d0f50, 0xc0014d0f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x70?, 0xc0014d0f50, 0xc0014d0f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x6a66a56?, 0xc001565800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x660e105?, 0xc001d40600?, 0xc0021bce70?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3022
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 3467 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001a826d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001433d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a827c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00092a990, {0x99a0fa0, 0xc001f44270}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00092a990, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3453
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 3010 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc000508f50, 0xc000508f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x10?, 0xc000508f50, 0xc000508f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0x10000c00055e9c0?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000508fd0?, 0x6a74e25?, 0xc001528680?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2997
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 1283 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc002160f00, 0xc001f715e0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 765
	/usr/local/go/src/os/exec/exec.go:759 +0x953

goroutine 3468 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001c5bf50, 0xc00148af98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x60?, 0xc001c5bf50, 0xc001c5bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc00098d040?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c5bfd0?, 0x660e164?, 0xc001607920?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3453
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 4036 [IO wait]:
internal/poll.runtime_pollWait(0x52c985e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0008f6720?, 0xc001a1adbe?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008f6720, {0xc001a1adbe, 0x1d242, 0x1d242})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001f76720, {0xc001a1adbe?, 0x10?, 0x1fe89?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0015b5d40, {0x999f3e8, 0xc001e4c418})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x999f580, 0xc0015b5d40}, {0x999f3e8, 0xc001e4c418}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00098c680?, {0x999f580, 0xc0015b5d40})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf?, {0x999f580?, 0xc0015b5d40?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x999f580, 0xc0015b5d40}, {0x999f4e0, 0xc001f76720}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001e51a80?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4034
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 3231 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3241
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3453 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a827c0, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3448
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3469 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3468
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3718 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3717
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3021 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3017
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4034 [syscall, 3 minutes]:
syscall.syscall6(0x52c76fa8?, 0x90?, 0xc0014a1bf8?, 0xc045108?, 0x90?, 0x1000006496fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc0014a1bb8?, 0x6492ac5?, 0x90?, 0x9907720?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc0004e4230?, 0xc0014a1bec, 0xc002129c38?, 0xc0018c2140?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc0009b3fc0)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0x64dd3d9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0017f7680)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0017f7680)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015281a0, 0xc0017f7680)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x99d97d8, 0xc0003df2d0}, 0xc0015281a0, {0xc0016e41c8, 0x12}, {0x41063d80162f758?, 0xc00162f760?}, {0x65ce933?, 0x652f26f?}, {0xc0019f7400, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0015281a0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0015281a0, 0xc001563080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3967
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 2717 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001a82750, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0017f9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a82780)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00097f390, {0x99a0fa0, 0xc001526750}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00097f390, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 3559 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3558
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3832 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3831
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2997 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000821240, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2995
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3006 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3005
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 4162 [IO wait]:
internal/poll.runtime_pollWait(0x53109ec0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0008f68a0?, 0xc001514258?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008f68a0, {0xc001514258, 0x5a8, 0x5a8})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e4c510, {0xc001514258?, 0x53071c18?, 0x20f?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001539350, {0x999f3e8, 0xc001f76aa0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x999f580, 0xc001539350}, {0x999f3e8, 0xc001f76aa0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xb1b6ee0?, {0x999f580, 0xc001539350})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xf?, {0x999f580?, 0xc001539350?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x999f580, 0xc001539350}, {0x999f4e0, 0xc001e4c510}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001ac5580?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4161
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 2872 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00213a690, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bf3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00213a6c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0023c9090, {0x99a0fa0, 0xc0015b44b0}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023c9090, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2853
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 2996 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2995
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2852 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2851
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3261 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3299
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3313 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00213a550, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000bf4d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x99f4a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00213a580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007af550, {0x99a0fa0, 0xc000832510}, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007af550, 0x3b9aca00, 0x0, 0x1, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3262
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 4113 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b22700, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4108
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 3700 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a9b000, 0xc00008a310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3696
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 4161 [syscall]:
syscall.syscall6(0x52c76fa8?, 0x90?, 0xc00154abf8?, 0xc045108?, 0x90?, 0x1000006496fc5?, 0x19?)
	/usr/local/go/src/runtime/sys_darwin.go:60 +0x78
syscall.wait4(0xc00154abb8?, 0x6492ac5?, 0x90?, 0x9907720?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xc0003df340?, 0xc00154abec, 0xc002128060?, 0xc001a88530?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).pidWait(0xc001e74e80)
	/usr/local/go/src/os/exec_unix.go:70 +0x86
os.(*Process).wait(0x64dd3d9?)
	/usr/local/go/src/os/exec_unix.go:30 +0x1b
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001ec3380)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001ec3380)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc001528680, 0xc001ec3380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x99d97d8, 0xc00049c540}, 0xc001528680, {0xc0017e1e00, 0x1c}, {0x138f1c5001c5bf58?, 0xc001c5bf60?}, {0x65ce933?, 0x652f26f?}, {0xc000131000, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001528680)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001528680, 0xc001ac5580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4103
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3967 [chan receive, 3 minutes]:
testing.(*T).Run(0xc00098cd00, {0x84737ab?, 0xc00163d522?}, 0xc001563080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00098cd00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00098cd00, 0xc001e51700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2703
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 3881 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3880
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 4112 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x99cfda0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4108
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4130 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x99d9af0, 0xc00008a310}, 0xc001aa0f50, 0xc001aa0f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x99d9af0, 0xc00008a310}, 0x10?, 0xc001aa0f50, 0xc001aa0f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x99d9af0?, 0xc00008a310?}, 0xc00098d380?, 0x65cf240?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001aa0fd0?, 0x660e164?, 0xc000622a10?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4113
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 4103 [chan receive]:
testing.(*T).Run(0xc0015289c0, {0x84737ab?, 0xc002227500?}, 0xc001ac5580)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0015289c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0015289c0, 0xc0005f9980)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2700
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 4163 [IO wait]:
internal/poll.runtime_pollWait(0x5310a1d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0008f6960?, 0xc0015a0f38?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0008f6960, {0xc0015a0f38, 0x30c8, 0x30c8})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e4c528, {0xc0015a0f38?, 0xc00162c550?, 0x7e77?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001539380, {0x999f3e8, 0xc001f76aa8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x999f580, 0xc001539380}, {0x999f3e8, 0xc001f76aa8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00162c678?, {0x999f580, 0xc001539380})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00162c738?, {0x999f580?, 0xc001539380?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x999f580, 0xc001539380}, {0x999f4e0, 0xc001e4c528}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000623e30?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4161
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 4164 [select]:
os/exec.(*Cmd).watchCtx(0xc001ec3380, 0xc002266310)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4161
	/usr/local/go/src/os/exec/exec.go:759 +0x953


Test pass (182/221)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.26
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.31.2/json-events 9.39
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.35
18 TestDownloadOnly/v1.31.2/DeleteAll 0.26
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.18
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 336.23
29 TestAddons/serial/Volcano 40.15
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 15.15
36 TestAddons/parallel/Ingress 17.78
37 TestAddons/parallel/InspektorGadget 10.48
38 TestAddons/parallel/MetricsServer 5.51
40 TestAddons/parallel/CSI 51.58
41 TestAddons/parallel/Headlamp 17.45
42 TestAddons/parallel/CloudSpanner 5.39
43 TestAddons/parallel/LocalPath 44.33
44 TestAddons/parallel/NvidiaDevicePlugin 5.39
45 TestAddons/parallel/Yakd 10.48
47 TestAddons/StoppedEnableDisable 6.01
55 TestHyperKitDriverInstallOrUpdate 8.29
58 TestErrorSpam/setup 37.69
59 TestErrorSpam/start 1.75
60 TestErrorSpam/status 0.57
61 TestErrorSpam/pause 1.39
62 TestErrorSpam/unpause 1.53
63 TestErrorSpam/stop 155.9
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.79
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 66.43
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
75 TestFunctional/serial/CacheCmd/cache/add_local 1.33
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
77 TestFunctional/serial/CacheCmd/cache/list 0.09
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.16
80 TestFunctional/serial/CacheCmd/cache/delete 0.19
81 TestFunctional/serial/MinikubeKubectlCmd 1.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.74
83 TestFunctional/serial/ExtraConfig 282.75
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 2.06
86 TestFunctional/serial/LogsFileCmd 2.28
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.66
90 TestFunctional/parallel/DashboardCmd 11.96
91 TestFunctional/parallel/DryRun 1.32
92 TestFunctional/parallel/InternationalLanguage 0.6
93 TestFunctional/parallel/StatusCmd 0.64
97 TestFunctional/parallel/ServiceCmdConnect 11.43
98 TestFunctional/parallel/AddonsCmd 0.26
99 TestFunctional/parallel/PersistentVolumeClaim 29.59
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1.11
103 TestFunctional/parallel/MySQL 25.95
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.23
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.15
113 TestFunctional/parallel/License 0.64
114 TestFunctional/parallel/Version/short 0.17
115 TestFunctional/parallel/Version/components 0.4
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.15
121 TestFunctional/parallel/ImageCommands/Setup 1.72
122 TestFunctional/parallel/DockerEnv/bash 0.68
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.68
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.47
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.15
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.05
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.14
145 TestFunctional/parallel/ServiceCmd/List 0.79
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.79
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
148 TestFunctional/parallel/ServiceCmd/Format 0.46
149 TestFunctional/parallel/ServiceCmd/URL 0.48
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
151 TestFunctional/parallel/ProfileCmd/profile_list 0.32
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
153 TestFunctional/parallel/MountCmd/any-port 5.99
154 TestFunctional/parallel/MountCmd/specific-port 1.88
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 222.13
163 TestMultiControlPlane/serial/DeployApp 5.98
164 TestMultiControlPlane/serial/PingHostFromPods 1.39
165 TestMultiControlPlane/serial/AddWorkerNode 50.29
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
168 TestMultiControlPlane/serial/CopyFile 10.3
169 TestMultiControlPlane/serial/StopSecondaryNode 8.78
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.43
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 225.66
174 TestMultiControlPlane/serial/DeleteSecondaryNode 7.76
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
176 TestMultiControlPlane/serial/StopCluster 25
183 TestImageBuild/serial/Setup 37.67
184 TestImageBuild/serial/NormalBuild 1.65
185 TestImageBuild/serial/BuildWithBuildArg 0.67
186 TestImageBuild/serial/BuildWithDockerIgnore 0.52
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.59
191 TestJSONOutput/start/Command 74.08
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.5
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.48
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 8.32
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.63
219 TestMainNoArgs 0.09
220 TestMinikubeProfile 104.09
226 TestMultiNode/serial/FreshStart2Nodes 108.73
227 TestMultiNode/serial/DeployApp2Nodes 4.49
228 TestMultiNode/serial/PingHostFrom2Pods 0.97
229 TestMultiNode/serial/AddNode 45.16
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.4
232 TestMultiNode/serial/CopyFile 5.97
233 TestMultiNode/serial/StopNode 2.93
234 TestMultiNode/serial/StartAfterStop 36.6
235 TestMultiNode/serial/RestartKeepsNodes 192.64
236 TestMultiNode/serial/DeleteNode 3.5
237 TestMultiNode/serial/StopMultiNode 16.85
238 TestMultiNode/serial/RestartMultiNode 100.39
239 TestMultiNode/serial/ValidateNameConflict 160.72
243 TestPreload 140.71
246 TestSkaffold 114.57
249 TestRunningBinaryUpgrade 90.07
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.38
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.71
266 TestStoppedBinaryUpgrade/Setup 2.08
267 TestStoppedBinaryUpgrade/Upgrade 126.46
270 TestStoppedBinaryUpgrade/MinikubeLogs 2.58
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.53
289 TestNoKubernetes/serial/Start 22.14
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.15
291 TestNoKubernetes/serial/ProfileList 0.64
292 TestNoKubernetes/serial/Stop 2.46
294 TestNoKubernetes/serial/StartNoArgs 21.09
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
TestDownloadOnly/v1.20.0/json-events (18.86s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-444000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (18.862096381s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.86s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1105 09:40:42.195618   17842 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1105 09:40:42.195812   17842 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-444000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-444000: exit status 85 (303.877666ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-444000 | jenkins | v1.34.0 | 05 Nov 24 09:40 PST |          |
	|         | -p download-only-444000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 09:40:23
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 09:40:23.399920   17843 out.go:345] Setting OutFile to fd 1 ...
	I1105 09:40:23.400144   17843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 09:40:23.400149   17843 out.go:358] Setting ErrFile to fd 2...
	I1105 09:40:23.400153   17843 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 09:40:23.400327   17843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	W1105 09:40:23.400431   17843 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19910-17277/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19910-17277/.minikube/config/config.json: no such file or directory
	I1105 09:40:23.402463   17843 out.go:352] Setting JSON to true
	I1105 09:40:23.430557   17843 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5992,"bootTime":1730822431,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 09:40:23.430731   17843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 09:40:23.452750   17843 out.go:97] [download-only-444000] minikube v1.34.0 on Darwin 15.0.1
	I1105 09:40:23.452990   17843 notify.go:220] Checking for updates...
	W1105 09:40:23.452988   17843 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball: no such file or directory
	I1105 09:40:23.474150   17843 out.go:169] MINIKUBE_LOCATION=19910
	I1105 09:40:23.497525   17843 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 09:40:23.519393   17843 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 09:40:23.540066   17843 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 09:40:23.561450   17843 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	W1105 09:40:23.603182   17843 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 09:40:23.603645   17843 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 09:40:23.636494   17843 out.go:97] Using the hyperkit driver based on user configuration
	I1105 09:40:23.636547   17843 start.go:297] selected driver: hyperkit
	I1105 09:40:23.636564   17843 start.go:901] validating driver "hyperkit" against <nil>
	I1105 09:40:23.636799   17843 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 09:40:23.637103   17843 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 09:40:24.021158   17843 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 09:40:24.028659   17843 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 09:40:24.028678   17843 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 09:40:24.028703   17843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 09:40:24.033889   17843 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1105 09:40:24.034047   17843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 09:40:24.034078   17843 cni.go:84] Creating CNI manager for ""
	I1105 09:40:24.034146   17843 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1105 09:40:24.034263   17843 start.go:340] cluster config:
	{Name:download-only-444000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-444000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 09:40:24.034547   17843 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 09:40:24.057105   17843 out.go:97] Downloading VM boot image ...
	I1105 09:40:24.057196   17843 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/iso/amd64/minikube-v1.34.0-1730282777-19883-amd64.iso
	I1105 09:40:30.984917   17843 out.go:97] Starting "download-only-444000" primary control-plane node in "download-only-444000" cluster
	I1105 09:40:30.984956   17843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1105 09:40:31.050726   17843 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1105 09:40:31.050743   17843 cache.go:56] Caching tarball of preloaded images
	I1105 09:40:31.051000   17843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1105 09:40:31.072729   17843 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1105 09:40:31.072783   17843 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1105 09:40:31.165814   17843 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-444000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-444000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.26s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-444000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.31.2/json-events (9.39s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-936000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-936000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=hyperkit : (9.387688155s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (9.39s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1105 09:40:52.379877   17842 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1105 09:40:52.379917   17842 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-936000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-936000: exit status 85 (350.19775ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-444000 | jenkins | v1.34.0 | 05 Nov 24 09:40 PST |                     |
	|         | -p download-only-444000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Nov 24 09:40 PST | 05 Nov 24 09:40 PST |
	| delete  | -p download-only-444000        | download-only-444000 | jenkins | v1.34.0 | 05 Nov 24 09:40 PST | 05 Nov 24 09:40 PST |
	| start   | -o=json --download-only        | download-only-936000 | jenkins | v1.34.0 | 05 Nov 24 09:40 PST |                     |
	|         | -p download-only-936000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 09:40:43
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 09:40:43.057220   17873 out.go:345] Setting OutFile to fd 1 ...
	I1105 09:40:43.057422   17873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 09:40:43.057427   17873 out.go:358] Setting ErrFile to fd 2...
	I1105 09:40:43.057431   17873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 09:40:43.057594   17873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 09:40:43.059182   17873 out.go:352] Setting JSON to true
	I1105 09:40:43.086846   17873 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6012,"bootTime":1730822431,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 09:40:43.087015   17873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 09:40:43.108517   17873 out.go:97] [download-only-936000] minikube v1.34.0 on Darwin 15.0.1
	I1105 09:40:43.108763   17873 notify.go:220] Checking for updates...
	I1105 09:40:43.130219   17873 out.go:169] MINIKUBE_LOCATION=19910
	I1105 09:40:43.151322   17873 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 09:40:43.173443   17873 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 09:40:43.194332   17873 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 09:40:43.216444   17873 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	W1105 09:40:43.258158   17873 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 09:40:43.258580   17873 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 09:40:43.291364   17873 out.go:97] Using the hyperkit driver based on user configuration
	I1105 09:40:43.291434   17873 start.go:297] selected driver: hyperkit
	I1105 09:40:43.291452   17873 start.go:901] validating driver "hyperkit" against <nil>
	I1105 09:40:43.291678   17873 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 09:40:43.291957   17873 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19910-17277/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1105 09:40:43.304129   17873 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I1105 09:40:43.310364   17873 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 09:40:43.310399   17873 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1105 09:40:43.310427   17873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 09:40:43.315331   17873 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1105 09:40:43.315476   17873 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 09:40:43.315505   17873 cni.go:84] Creating CNI manager for ""
	I1105 09:40:43.315551   17873 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1105 09:40:43.315564   17873 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1105 09:40:43.315631   17873 start.go:340] cluster config:
	{Name:download-only-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 09:40:43.315717   17873 iso.go:125] acquiring lock: {Name:mka3d5e234f2ff3441663646bb1b78ffeeb4e52b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 09:40:43.337292   17873 out.go:97] Starting "download-only-936000" primary control-plane node in "download-only-936000" cluster
	I1105 09:40:43.337327   17873 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 09:40:43.401985   17873 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	I1105 09:40:43.402053   17873 cache.go:56] Caching tarball of preloaded images
	I1105 09:40:43.402568   17873 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1105 09:40:43.424324   17873 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1105 09:40:43.424385   17873 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4 ...
	I1105 09:40:43.524394   17873 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4?checksum=md5:979f32540b837894423b337fec69fbf6 -> /Users/jenkins/minikube-integration/19910-17277/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-936000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-936000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.35s)

TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.26s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-936000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.24s)

TestBinaryMirror (0.99s)

=== RUN   TestBinaryMirror
I1105 09:40:53.696794   17842 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-993000 --alsologtostderr --binary-mirror http://127.0.0.1:56260 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-993000
--- PASS: TestBinaryMirror (0.99s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-133000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-133000: exit status 85 (181.151958ms)

-- stdout --
	* Profile "addons-133000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.18s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-133000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-133000: exit status 85 (202.624438ms)

-- stdout --
	* Profile "addons-133000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

TestAddons/Setup (336.23s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-133000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-133000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (5m36.226300949s)
--- PASS: TestAddons/Setup (336.23s)

TestAddons/serial/Volcano (40.15s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 11.496267ms
addons_test.go:807: volcano-scheduler stabilized in 11.528671ms
addons_test.go:815: volcano-admission stabilized in 11.677977ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-6qmbg" [70d35c2d-b9ff-44f8-95da-ba021a2ffc81] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004293687s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-zt6kz" [3b964b6d-a545-4c19-a009-44b0437ef6fe] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002958147s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-clxfv" [5d4c566c-673f-4229-a8cd-385ecc4682c6] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003946278s
addons_test.go:842: (dbg) Run:  kubectl --context addons-133000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-133000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-133000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a67c9125-b31f-4067-ad60-b3b3985417d9] Pending
helpers_test.go:344: "test-job-nginx-0" [a67c9125-b31f-4067-ad60-b3b3985417d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a67c9125-b31f-4067-ad60-b3b3985417d9] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003482983s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable volcano --alsologtostderr -v=1: (10.855146679s)
--- PASS: TestAddons/serial/Volcano (40.15s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-133000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-133000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-133000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-133000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8313c87-b643-419f-9bda-bc9d421672cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8313c87-b643-419f-9bda-bc9d421672cb] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004047904s
addons_test.go:633: (dbg) Run:  kubectl --context addons-133000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-133000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-133000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-133000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

TestAddons/parallel/Registry (15.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.682866ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-m55cg" [c192240f-4ceb-4a6b-9ac5-c2c1498bbbae] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005180959s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bk6pz" [e47cbe35-aa08-4386-8626-bd378982cffa] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003051013s
addons_test.go:331: (dbg) Run:  kubectl --context addons-133000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-133000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-133000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.485585728s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 ip
2024/11/05 09:47:44 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.15s)

TestAddons/parallel/Ingress (17.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-133000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-133000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-133000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [16c70082-cf9f-4509-b686-604a165e6792] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [16c70082-cf9f-4509-b686-604a165e6792] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003979692s
I1105 09:48:55.567443   17842 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-133000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable ingress-dns --alsologtostderr -v=1: (1.227190534s)
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable ingress --alsologtostderr -v=1: (7.477296558s)
--- PASS: TestAddons/parallel/Ingress (17.78s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x78n4" [2a164cae-401e-4daa-a9d4-ef5bf1ca1f5a] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00711348s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.475058874s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.593688ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dv8bt" [4d76a5d0-2219-465b-a3f7-2355a1381194] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005489703s
addons_test.go:402: (dbg) Run:  kubectl --context addons-133000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.51s)

                                                

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1105 09:48:06.063932   17842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1105 09:48:06.067308   17842 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1105 09:48:06.067318   17842 kapi.go:107] duration metric: took 3.395739ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.401197ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-133000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-133000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ff79c843-df26-48df-b6f3-0ba8b202d805] Pending
helpers_test.go:344: "task-pv-pod" [ff79c843-df26-48df-b6f3-0ba8b202d805] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ff79c843-df26-48df-b6f3-0ba8b202d805] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.005559689s
addons_test.go:511: (dbg) Run:  kubectl --context addons-133000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-133000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-133000 delete pod task-pv-pod: (1.212821478s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-133000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-133000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-133000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d20fff48-912c-4346-9a5a-803a5ca8ab4b] Pending
helpers_test.go:344: "task-pv-pod-restore" [d20fff48-912c-4346-9a5a-803a5ca8ab4b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d20fff48-912c-4346-9a5a-803a5ca8ab4b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002662584s
addons_test.go:553: (dbg) Run:  kubectl --context addons-133000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-133000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-133000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.492670278s)
--- PASS: TestAddons/parallel/CSI (51.58s)

TestAddons/parallel/Headlamp (17.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-133000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qlq6c" [ad10cb43-1a28-4ddb-a02d-c2a0e5693630] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qlq6c" [ad10cb43-1a28-4ddb-a02d-c2a0e5693630] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004746081s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable headlamp --alsologtostderr -v=1: (5.545150722s)
--- PASS: TestAddons/parallel/Headlamp (17.45s)

TestAddons/parallel/CloudSpanner (5.39s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-b729n" [34446776-0cff-479c-a68d-d2f0cacdfba4] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003431017s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.39s)

TestAddons/parallel/LocalPath (44.33s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-133000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-133000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a939595e-5598-4d16-9a2b-d186648f0d59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a939595e-5598-4d16-9a2b-d186648f0d59] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a939595e-5598-4d16-9a2b-d186648f0d59] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002337963s
addons_test.go:906: (dbg) Run:  kubectl --context addons-133000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 ssh "cat /opt/local-path-provisioner/pvc-56b738e8-4800-4a2c-9f6f-fc64a1fef59f_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-133000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-133000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (34.623812694s)
--- PASS: TestAddons/parallel/LocalPath (44.33s)

TestAddons/parallel/NvidiaDevicePlugin (5.39s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9c7x9" [318e29f1-8fda-4ada-8a20-2fcc0a7cfa91] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003746577s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)

TestAddons/parallel/Yakd (10.48s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-c5mvg" [4754e639-a3e8-49bb-87ca-493e4e31aa25] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004308s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-amd64 -p addons-133000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-amd64 -p addons-133000 addons disable yakd --alsologtostderr -v=1: (5.478619366s)
--- PASS: TestAddons/parallel/Yakd (10.48s)

TestAddons/StoppedEnableDisable (6.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-133000
addons_test.go:170: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-133000: (5.402582959s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-133000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-133000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-133000
--- PASS: TestAddons/StoppedEnableDisable (6.01s)

TestHyperKitDriverInstallOrUpdate (8.29s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
I1105 10:41:31.848771   17842 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1105 10:41:31.848968   17842 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W1105 10:41:32.594866   17842 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1105 10:41:32.595091   17842 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1105 10:41:32.595149   17842 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit
I1105 10:41:33.086397   17842 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20] Decompressors:map[bz2:0xc0007d4ed0 gz:0xc0007d4ed8 tar:0xc0007d4e80 tar.bz2:0xc0007d4e90 tar.gz:0xc0007d4ea0 tar.xz:0xc0007d4eb0 tar.zst:0xc0007d4ec0 tbz2:0xc0007d4e90 tgz:0xc0007d4ea0 txz:0xc0007d4eb0 tzst:0xc0007d4ec0 xz:0xc0007d4ee0 zip:0xc0007d4ef0 zst:0xc0007d4ee8] Getters:map[file:0xc000835d70 http:0xc0008b3db0 https:0xc0008b3e00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1105 10:41:33.086431   17842 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit
I1105 10:41:35.912558   17842 install.go:79] stdout: 
W1105 10:41:35.912707   17842 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit 
I1105 10:41:35.912740   17842 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit]
I1105 10:41:35.934525   17842 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit]
I1105 10:41:35.955252   17842 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit]
I1105 10:41:35.975230   17842 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/001/docker-machine-driver-hyperkit]
I1105 10:41:36.014428   17842 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1105 10:41:36.014562   17842 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I1105 10:41:36.712589   17842 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1105 10:41:36.712620   17842 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1105 10:41:36.712695   17842 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1105 10:41:36.712729   17842 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit
I1105 10:41:37.102552   17842 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20 0xb324e20] Decompressors:map[bz2:0xc0007d4ed0 gz:0xc0007d4ed8 tar:0xc0007d4e80 tar.bz2:0xc0007d4e90 tar.gz:0xc0007d4ea0 tar.xz:0xc0007d4eb0 tar.zst:0xc0007d4ec0 tbz2:0xc0007d4e90 tgz:0xc0007d4ea0 txz:0xc0007d4eb0 tzst:0xc0007d4ec0 xz:0xc0007d4ee0 zip:0xc0007d4ef0 zst:0xc0007d4ee8] Getters:map[file:0xc000916290 http:0xc000715c20 https:0xc000715d10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1105 10:41:37.102592   17842 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit
I1105 10:41:40.039546   17842 install.go:79] stdout: 
W1105 10:41:40.039683   17842 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit 
I1105 10:41:40.039741   17842 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit]
I1105 10:41:40.060370   17842 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit]
I1105 10:41:40.081200   17842 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit]
I1105 10:41:40.100529   17842 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate4222615740/002/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (8.29s)

TestErrorSpam/setup (37.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-641000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-641000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 --driver=hyperkit : (37.692094189s)
--- PASS: TestErrorSpam/setup (37.69s)

TestErrorSpam/start (1.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 start --dry-run
--- PASS: TestErrorSpam/start (1.75s)

TestErrorSpam/status (0.57s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 status
--- PASS: TestErrorSpam/status (0.57s)

TestErrorSpam/pause (1.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 pause
--- PASS: TestErrorSpam/pause (1.39s)

TestErrorSpam/unpause (1.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (155.9s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop: (5.452320951s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop: (1m15.225695567s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop
E1105 09:51:31.104698   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.112276   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.124512   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.148098   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.190227   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.273140   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.435374   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:31.758479   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:32.402039   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:33.685119   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:36.247199   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:41.369603   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:51:51.613078   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:52:12.095316   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-641000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-641000 stop: (1m15.21828887s)
--- PASS: TestErrorSpam/stop (155.90s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19910-17277/.minikube/files/etc/test/nested/copy/17842/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E1105 09:52:53.072044   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (52.792084094s)
--- PASS: TestFunctional/serial/StartWithProxy (52.79s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (66.43s)

=== RUN   TestFunctional/serial/SoftStart
I1105 09:53:23.591011   17842 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --alsologtostderr -v=8
E1105 09:54:15.004409   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --alsologtostderr -v=8: (1m6.432134699s)
functional_test.go:663: soft start took 1m6.432608167s for "functional-142000" cluster.
I1105 09:54:30.024140   17842 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (66.43s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-142000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.1: (1.145698384s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:3.3: (1.146439478s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 cache add registry.k8s.io/pause:latest: (1.086351785s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2350375714/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache add minikube-local-cache-test:functional-142000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache delete minikube-local-cache-test:functional-142000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-142000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (165.082816ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.16s)

TestFunctional/serial/CacheCmd/cache/delete (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.19s)

TestFunctional/serial/MinikubeKubectlCmd (1.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 kubectl -- --context functional-142000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 kubectl -- --context functional-142000 get pods: (1.146962483s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-142000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-142000 get pods: (1.739151384s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.74s)

TestFunctional/serial/ExtraConfig (282.75s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1105 09:56:31.128606   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
E1105 09:56:58.844805   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-142000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m42.749267431s)
functional_test.go:761: restart took 4m42.749365198s for "functional-142000" cluster.
I1105 09:59:22.199589   17842 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (282.75s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-142000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 logs: (2.059812282s)
--- PASS: TestFunctional/serial/LogsCmd (2.06s)

TestFunctional/serial/LogsFileCmd (2.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2481516726/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2481516726/001/logs.txt: (2.273786863s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.28s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-142000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-142000: exit status 115 (296.825299ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:32135 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-142000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)

TestFunctional/parallel/ConfigCmd (0.66s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd


=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 config get cpus: exit status 14 (67.36642ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 config get cpus: exit status 14 (72.087079ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.66s)

TestFunctional/parallel/DashboardCmd (11.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd


=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-142000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-142000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 19645: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.96s)

TestFunctional/parallel/DryRun (1.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun


=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (541.076626ms)

-- stdout --
	* [functional-142000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1105 10:00:29.812629   19572 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:00:29.812848   19572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:29.812854   19572 out.go:358] Setting ErrFile to fd 2...
	I1105 10:00:29.812857   19572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:29.813054   19572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:00:29.814482   19572 out.go:352] Setting JSON to false
	I1105 10:00:29.843080   19572 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7198,"bootTime":1730822431,"procs":582,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:00:29.843233   19572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:00:29.870652   19572 out.go:177] * [functional-142000] minikube v1.34.0 on Darwin 15.0.1
	I1105 10:00:29.912732   19572 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:00:29.912766   19572 notify.go:220] Checking for updates...
	I1105 10:00:29.957444   19572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:00:29.978621   19572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:00:29.999947   19572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:00:30.020490   19572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:30.041652   19572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:00:30.063683   19572 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:00:30.064404   19572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:30.064473   19572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:30.076560   19572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57451
	I1105 10:00:30.076892   19572 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:30.077307   19572 main.go:141] libmachine: Using API Version  1
	I1105 10:00:30.077320   19572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:30.077596   19572 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:30.077712   19572 main.go:141] libmachine: (functional-142000) Calling .DriverName
	I1105 10:00:30.077913   19572 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:00:30.078180   19572 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:30.078204   19572 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:30.089268   19572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57453
	I1105 10:00:30.089615   19572 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:30.090004   19572 main.go:141] libmachine: Using API Version  1
	I1105 10:00:30.090021   19572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:30.090241   19572 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:30.090358   19572 main.go:141] libmachine: (functional-142000) Calling .DriverName
	I1105 10:00:30.138559   19572 out.go:177] * Using the hyperkit driver based on existing profile
	I1105 10:00:30.159494   19572 start.go:297] selected driver: hyperkit
	I1105 10:00:30.159510   19572 start.go:901] validating driver "hyperkit" against &{Name:functional-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:functional-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:00:30.159655   19572 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:00:30.201790   19572 out.go:201] 
	W1105 10:00:30.222460   19572 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1105 10:00:30.243728   19572 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.32s)

TestFunctional/parallel/InternationalLanguage (0.6s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage


=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-142000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (600.259437ms)

-- stdout --
	* [functional-142000] minikube v1.34.0 sur Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1105 10:00:29.201730   19557 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:00:29.201941   19557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:29.201947   19557 out.go:358] Setting ErrFile to fd 2...
	I1105 10:00:29.201950   19557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:00:29.202141   19557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:00:29.203761   19557 out.go:352] Setting JSON to false
	I1105 10:00:29.232278   19557 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7198,"bootTime":1730822431,"procs":576,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1105 10:00:29.232438   19557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1105 10:00:29.254375   19557 out.go:177] * [functional-142000] minikube v1.34.0 sur Darwin 15.0.1
	I1105 10:00:29.311784   19557 notify.go:220] Checking for updates...
	I1105 10:00:29.348987   19557 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 10:00:29.390604   19557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	I1105 10:00:29.411783   19557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1105 10:00:29.453660   19557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 10:00:29.474710   19557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	I1105 10:00:29.516626   19557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 10:00:29.538128   19557 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:00:29.538470   19557 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:29.538509   19557 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:29.549885   19557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57436
	I1105 10:00:29.550231   19557 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:29.550660   19557 main.go:141] libmachine: Using API Version  1
	I1105 10:00:29.550670   19557 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:29.550921   19557 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:29.551022   19557 main.go:141] libmachine: (functional-142000) Calling .DriverName
	I1105 10:00:29.551210   19557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 10:00:29.551483   19557 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:00:29.551512   19557 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:00:29.562724   19557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57441
	I1105 10:00:29.563038   19557 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:00:29.563397   19557 main.go:141] libmachine: Using API Version  1
	I1105 10:00:29.563412   19557 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:00:29.563621   19557 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:00:29.563721   19557 main.go:141] libmachine: (functional-142000) Calling .DriverName
	I1105 10:00:29.594681   19557 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1105 10:00:29.637129   19557 start.go:297] selected driver: hyperkit
	I1105 10:00:29.637160   19557 start.go:901] validating driver "hyperkit" against &{Name:functional-142000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19883/minikube-v1.34.0-1730282777-19883-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 10:00:29.637376   19557 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 10:00:29.682745   19557 out.go:201] 
	W1105 10:00:29.703838   19557 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1105 10:00:29.724873   19557 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.60s)

TestFunctional/parallel/StatusCmd (0.64s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.64s)

TestFunctional/parallel/ServiceCmdConnect (11.43s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-142000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-142000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xnq8m" [c6abe739-138b-4af2-a19b-10c584bd2e1a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xnq8m" [c6abe739-138b-4af2-a19b-10c584bd2e1a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.007690384s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31252
functional_test.go:1675: http://192.169.0.4:31252: success! body:

Hostname: hello-node-connect-67bdd5bbb4-xnq8m

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31252
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.43s)

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (29.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [77409d09-1803-4eee-b404-3388bacdc3c4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003321751s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-142000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-142000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [83d95496-fdf1-4256-8441-0c8a470ec735] Pending
helpers_test.go:344: "sp-pod" [83d95496-fdf1-4256-8441-0c8a470ec735] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [83d95496-fdf1-4256-8441-0c8a470ec735] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004241442s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-142000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-142000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fa1368c6-b0c4-4edf-b412-40e5d66aa2db] Pending
helpers_test.go:344: "sp-pod" [fa1368c6-b0c4-4edf-b412-40e5d66aa2db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fa1368c6-b0c4-4edf-b412-40e5d66aa2db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00360549s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-142000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.59s)

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp functional-142000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd1757359156/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -n functional-142000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)

TestFunctional/parallel/MySQL (25.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-142000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-w4nm6" [769dbd89-7dba-42a4-b057-4609d53b0504] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-w4nm6" [769dbd89-7dba-42a4-b057-4609d53b0504] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003075614s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-142000 exec mysql-6cdb49bbb-w4nm6 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-142000 exec mysql-6cdb49bbb-w4nm6 -- mysql -ppassword -e "show databases;": exit status 1 (137.874455ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1105 09:59:56.137064   17842 retry.go:31] will retry after 1.274502989s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-142000 exec mysql-6cdb49bbb-w4nm6 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-142000 exec mysql-6cdb49bbb-w4nm6 -- mysql -ppassword -e "show databases;": exit status 1 (110.763626ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1105 09:59:57.523690   17842 retry.go:31] will retry after 2.180167402s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-142000 exec mysql-6cdb49bbb-w4nm6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.95s)

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17842/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/test/nested/copy/17842/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17842.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/17842.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17842.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /usr/share/ca-certificates/17842.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/178422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/178422.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/178422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /usr/share/ca-certificates/178422.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-142000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "sudo systemctl is-active crio": exit status 1 (148.900085ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (0.4s)
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-142000
docker.io/kicbase/echo-server:functional-142000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format short --alsologtostderr:
I1105 10:00:33.425844   19658 out.go:345] Setting OutFile to fd 1 ...
I1105 10:00:33.426189   19658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.426195   19658 out.go:358] Setting ErrFile to fd 2...
I1105 10:00:33.426199   19658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.426388   19658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:00:33.427009   19658 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.427104   19658 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.427449   19658 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.427496   19658 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.438576   19658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57564
I1105 10:00:33.439037   19658 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.439486   19658 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.439518   19658 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.439758   19658 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.439865   19658 main.go:141] libmachine: (functional-142000) Calling .GetState
I1105 10:00:33.439945   19658 main.go:141] libmachine: (functional-142000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:00:33.440012   19658 main.go:141] libmachine: (functional-142000) DBG | hyperkit pid from json: 18517
I1105 10:00:33.441691   19658 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.441715   19658 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.452727   19658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57566
I1105 10:00:33.453054   19658 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.453396   19658 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.453404   19658 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.453654   19658 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.453755   19658 main.go:141] libmachine: (functional-142000) Calling .DriverName
I1105 10:00:33.453937   19658 ssh_runner.go:195] Run: systemctl --version
I1105 10:00:33.453958   19658 main.go:141] libmachine: (functional-142000) Calling .GetSSHHostname
I1105 10:00:33.454047   19658 main.go:141] libmachine: (functional-142000) Calling .GetSSHPort
I1105 10:00:33.454124   19658 main.go:141] libmachine: (functional-142000) Calling .GetSSHKeyPath
I1105 10:00:33.454197   19658 main.go:141] libmachine: (functional-142000) Calling .GetSSHUsername
I1105 10:00:33.454310   19658 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/functional-142000/id_rsa Username:docker}
I1105 10:00:33.487081   19658 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1105 10:00:33.505199   19658 main.go:141] libmachine: Making call to close driver server
I1105 10:00:33.505207   19658 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:33.505363   19658 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:33.505373   19658 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:33.505378   19658 main.go:141] libmachine: Making call to close driver server
I1105 10:00:33.505378   19658 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
I1105 10:00:33.505396   19658 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:33.505597   19658 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
I1105 10:00:33.505617   19658 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:33.505643   19658 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-142000 | d7dc824165a8e | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.2           | 847c7bc1a5418 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 505d571f5fd56 | 91.5MB |
| docker.io/library/nginx                     | alpine            | cb8f91112b6b5 | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.31.2           | 9499c9960544e | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 0486b6c53a1b5 | 88.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kicbase/echo-server               | functional-142000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| localhost/my-image                          | functional-142000 | 1432622347887 | 1.24MB |
| docker.io/library/nginx                     | latest            | 3b25b682ea82b | 192MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format table --alsologtostderr:
I1105 10:00:36.099858   19685 out.go:345] Setting OutFile to fd 1 ...
I1105 10:00:36.100669   19685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:36.100677   19685 out.go:358] Setting ErrFile to fd 2...
I1105 10:00:36.100683   19685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:36.100969   19685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:00:36.101862   19685 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:36.101962   19685 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:36.102292   19685 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:36.102336   19685 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:36.113627   19685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57600
I1105 10:00:36.114096   19685 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:36.114547   19685 main.go:141] libmachine: Using API Version  1
I1105 10:00:36.114557   19685 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:36.114775   19685 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:36.114877   19685 main.go:141] libmachine: (functional-142000) Calling .GetState
I1105 10:00:36.114968   19685 main.go:141] libmachine: (functional-142000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:00:36.115042   19685 main.go:141] libmachine: (functional-142000) DBG | hyperkit pid from json: 18517
I1105 10:00:36.116598   19685 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:36.116618   19685 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:36.127846   19685 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57602
I1105 10:00:36.128174   19685 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:36.128518   19685 main.go:141] libmachine: Using API Version  1
I1105 10:00:36.128531   19685 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:36.128741   19685 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:36.128832   19685 main.go:141] libmachine: (functional-142000) Calling .DriverName
I1105 10:00:36.129010   19685 ssh_runner.go:195] Run: systemctl --version
I1105 10:00:36.129028   19685 main.go:141] libmachine: (functional-142000) Calling .GetSSHHostname
I1105 10:00:36.129112   19685 main.go:141] libmachine: (functional-142000) Calling .GetSSHPort
I1105 10:00:36.129184   19685 main.go:141] libmachine: (functional-142000) Calling .GetSSHKeyPath
I1105 10:00:36.129271   19685 main.go:141] libmachine: (functional-142000) Calling .GetSSHUsername
I1105 10:00:36.129360   19685 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/functional-142000/id_rsa Username:docker}
I1105 10:00:36.163026   19685 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1105 10:00:36.189811   19685 main.go:141] libmachine: Making call to close driver server
I1105 10:00:36.189821   19685 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:36.189975   19685 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:36.189983   19685 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:36.189989   19685 main.go:141] libmachine: Making call to close driver server
I1105 10:00:36.189995   19685 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:36.190115   19685 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:36.190126   19685 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:36.190134   19685 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
2024/11/05 10:00:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)
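Each of the `image ls` format variants traced above shells into the VM and runs `docker images --no-trunc --format "{{json .}}"` (visible in the stderr trace), then renders the JSON-lines stdout into the requested format. A minimal Python sketch of that parsing step, assuming docker's standard `Repository`/`Tag`/`ID`/`Size` template fields; `parse_docker_images` is a hypothetical helper and the sample records are abbreviated:

```python
import json

def parse_docker_images(output: str):
    """Parse `docker images --format "{{json .}}"` output.

    Each non-empty stdout line is one JSON object; the field names
    used here are docker's Go-template fields.
    """
    images = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)
        images.append({
            "image": rec["Repository"],
            "tag": rec["Tag"],
            "id": rec["ID"],
            "size": rec["Size"],
        })
    return images

# Abbreviated stand-in records, not real digests.
sample = "\n".join([
    '{"Repository":"registry.k8s.io/pause","Tag":"3.10","ID":"sha256:873ed751","Size":"736kB"}',
    '{"Repository":"docker.io/library/nginx","Tag":"alpine","ID":"sha256:cb8f9111","Size":"47MB"}',
])
rows = parse_docker_images(sample)
```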
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr:
[{"id":"143262234788744ce6a497b134d8b250328ba9e7ad5da9ee631e79f5dcfb291f","repoDigests":[],"repoTags":["localhost/my-image:functional-142000"],"size":"1240000"},{"id":"d7dc824165a8e9c04d4b8fbce868753689a7c9bcbadf68b2d9f7f2329844d16c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-142000"],"size":"30"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67400000"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-142000"],"size":"4940000"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"94200000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format json --alsologtostderr:
I1105 10:00:35.919382   19681 out.go:345] Setting OutFile to fd 1 ...
I1105 10:00:35.919717   19681 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:35.919723   19681 out.go:358] Setting ErrFile to fd 2...
I1105 10:00:35.919727   19681 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:35.919918   19681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:00:35.920596   19681 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:35.920699   19681 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:35.921073   19681 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:35.921114   19681 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:35.932341   19681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57595
I1105 10:00:35.932729   19681 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:35.933173   19681 main.go:141] libmachine: Using API Version  1
I1105 10:00:35.933204   19681 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:35.933460   19681 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:35.933576   19681 main.go:141] libmachine: (functional-142000) Calling .GetState
I1105 10:00:35.933683   19681 main.go:141] libmachine: (functional-142000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:00:35.933743   19681 main.go:141] libmachine: (functional-142000) DBG | hyperkit pid from json: 18517
I1105 10:00:35.935327   19681 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:35.935355   19681 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:35.946700   19681 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57597
I1105 10:00:35.947034   19681 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:35.947393   19681 main.go:141] libmachine: Using API Version  1
I1105 10:00:35.947406   19681 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:35.947645   19681 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:35.947753   19681 main.go:141] libmachine: (functional-142000) Calling .DriverName
I1105 10:00:35.947934   19681 ssh_runner.go:195] Run: systemctl --version
I1105 10:00:35.947953   19681 main.go:141] libmachine: (functional-142000) Calling .GetSSHHostname
I1105 10:00:35.948028   19681 main.go:141] libmachine: (functional-142000) Calling .GetSSHPort
I1105 10:00:35.948115   19681 main.go:141] libmachine: (functional-142000) Calling .GetSSHKeyPath
I1105 10:00:35.948201   19681 main.go:141] libmachine: (functional-142000) Calling .GetSSHUsername
I1105 10:00:35.948291   19681 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/functional-142000/id_rsa Username:docker}
I1105 10:00:35.982813   19681 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1105 10:00:36.007976   19681 main.go:141] libmachine: Making call to close driver server
I1105 10:00:36.007985   19681 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:36.008130   19681 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:36.008139   19681 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:36.008144   19681 main.go:141] libmachine: Making call to close driver server
I1105 10:00:36.008148   19681 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:36.008267   19681 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:36.008277   19681 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:36.008328   19681 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
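The JSON output above is a single array of objects with `id`, `repoDigests`, `repoTags`, and `size` fields, where `size` is a decimal byte count encoded as a string. A small sketch of consuming that shape, using abbreviated stand-in data rather than the real digests:

```python
import json

# Shape matches the `image ls --format json` stdout above: a JSON
# array of {"id", "repoDigests", "repoTags", "size"} objects, with
# size as a decimal-string byte count. IDs here are truncated
# placeholders.
payload = """[
  {"id": "873ed751", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"},
  {"id": "2e96e591", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "148000000"}
]"""

images = json.loads(payload)
# Total on-disk footprint and a flat list of every tag.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
```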
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr:
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "94200000"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d7dc824165a8e9c04d4b8fbce868753689a7c9bcbadf68b2d9f7f2329844d16c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-142000
size: "30"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67400000"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "88400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-142000
size: "4940000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image ls --format yaml --alsologtostderr:
I1105 10:00:33.599264   19663 out.go:345] Setting OutFile to fd 1 ...
I1105 10:00:33.600033   19663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.600042   19663 out.go:358] Setting ErrFile to fd 2...
I1105 10:00:33.600048   19663 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.600554   19663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:00:33.601198   19663 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.601292   19663 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.601641   19663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.601684   19663 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.612475   19663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57569
I1105 10:00:33.612903   19663 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.613316   19663 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.613326   19663 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.613555   19663 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.613685   19663 main.go:141] libmachine: (functional-142000) Calling .GetState
I1105 10:00:33.613793   19663 main.go:141] libmachine: (functional-142000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:00:33.613867   19663 main.go:141] libmachine: (functional-142000) DBG | hyperkit pid from json: 18517
I1105 10:00:33.615381   19663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.615403   19663 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.626342   19663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57571
I1105 10:00:33.626693   19663 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.627107   19663 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.627127   19663 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.627365   19663 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.627483   19663 main.go:141] libmachine: (functional-142000) Calling .DriverName
I1105 10:00:33.627667   19663 ssh_runner.go:195] Run: systemctl --version
I1105 10:00:33.627686   19663 main.go:141] libmachine: (functional-142000) Calling .GetSSHHostname
I1105 10:00:33.627768   19663 main.go:141] libmachine: (functional-142000) Calling .GetSSHPort
I1105 10:00:33.627850   19663 main.go:141] libmachine: (functional-142000) Calling .GetSSHKeyPath
I1105 10:00:33.627931   19663 main.go:141] libmachine: (functional-142000) Calling .GetSSHUsername
I1105 10:00:33.628031   19663 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/functional-142000/id_rsa Username:docker}
I1105 10:00:33.660589   19663 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1105 10:00:33.677127   19663 main.go:141] libmachine: Making call to close driver server
I1105 10:00:33.677136   19663 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:33.677296   19663 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:33.677305   19663 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:33.677331   19663 main.go:141] libmachine: Making call to close driver server
I1105 10:00:33.677344   19663 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:33.677345   19663 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
I1105 10:00:33.677458   19663 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:33.677468   19663 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:33.677480   19663 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh pgrep buildkitd: exit status 1 (145.773171ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr: (1.832127584s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-142000 image build -t localhost/my-image:functional-142000 testdata/build --alsologtostderr:
I1105 10:00:33.915391   19672 out.go:345] Setting OutFile to fd 1 ...
I1105 10:00:33.916387   19672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.916394   19672 out.go:358] Setting ErrFile to fd 2...
I1105 10:00:33.916398   19672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 10:00:33.916582   19672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
I1105 10:00:33.917287   19672 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.918007   19672 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1105 10:00:33.918414   19672 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.918447   19672 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.929265   19672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57582
I1105 10:00:33.929655   19672 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.930099   19672 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.930110   19672 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.930393   19672 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.930514   19672 main.go:141] libmachine: (functional-142000) Calling .GetState
I1105 10:00:33.930627   19672 main.go:141] libmachine: (functional-142000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1105 10:00:33.930684   19672 main.go:141] libmachine: (functional-142000) DBG | hyperkit pid from json: 18517
I1105 10:00:33.932264   19672 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1105 10:00:33.932288   19672 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1105 10:00:33.943134   19672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57584
I1105 10:00:33.943478   19672 main.go:141] libmachine: () Calling .GetVersion
I1105 10:00:33.943857   19672 main.go:141] libmachine: Using API Version  1
I1105 10:00:33.943872   19672 main.go:141] libmachine: () Calling .SetConfigRaw
I1105 10:00:33.944125   19672 main.go:141] libmachine: () Calling .GetMachineName
I1105 10:00:33.944251   19672 main.go:141] libmachine: (functional-142000) Calling .DriverName
I1105 10:00:33.944443   19672 ssh_runner.go:195] Run: systemctl --version
I1105 10:00:33.944465   19672 main.go:141] libmachine: (functional-142000) Calling .GetSSHHostname
I1105 10:00:33.944562   19672 main.go:141] libmachine: (functional-142000) Calling .GetSSHPort
I1105 10:00:33.944655   19672 main.go:141] libmachine: (functional-142000) Calling .GetSSHKeyPath
I1105 10:00:33.944775   19672 main.go:141] libmachine: (functional-142000) Calling .GetSSHUsername
I1105 10:00:33.944861   19672 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/functional-142000/id_rsa Username:docker}
I1105 10:00:33.978873   19672 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.654881130.tar
I1105 10:00:33.978956   19672 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1105 10:00:33.987607   19672 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.654881130.tar
I1105 10:00:33.992052   19672 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.654881130.tar: stat -c "%s %y" /var/lib/minikube/build/build.654881130.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.654881130.tar': No such file or directory
I1105 10:00:33.992083   19672 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.654881130.tar --> /var/lib/minikube/build/build.654881130.tar (3072 bytes)
I1105 10:00:34.012409   19672 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.654881130
I1105 10:00:34.020851   19672 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.654881130 -xf /var/lib/minikube/build/build.654881130.tar
I1105 10:00:34.029034   19672 docker.go:360] Building image: /var/lib/minikube/build/build.654881130
I1105 10:00:34.029112   19672 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-142000 /var/lib/minikube/build/build.654881130
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:143262234788744ce6a497b134d8b250328ba9e7ad5da9ee631e79f5dcfb291f done
#8 naming to localhost/my-image:functional-142000 done
#8 DONE 0.0s
I1105 10:00:35.636181   19672 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-142000 /var/lib/minikube/build/build.654881130: (1.607071197s)
I1105 10:00:35.636261   19672 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.654881130
I1105 10:00:35.644773   19672 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.654881130.tar
I1105 10:00:35.653150   19672 build_images.go:217] Built localhost/my-image:functional-142000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.654881130.tar
I1105 10:00:35.653183   19672 build_images.go:133] succeeded building to: functional-142000
I1105 10:00:35.653188   19672 build_images.go:134] failed building to: 
I1105 10:00:35.653223   19672 main.go:141] libmachine: Making call to close driver server
I1105 10:00:35.653236   19672 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:35.653390   19672 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
I1105 10:00:35.653398   19672 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:35.653409   19672 main.go:141] libmachine: Making call to close connection to plugin binary
I1105 10:00:35.653416   19672 main.go:141] libmachine: Making call to close driver server
I1105 10:00:35.653421   19672 main.go:141] libmachine: (functional-142000) Calling .Close
I1105 10:00:35.653569   19672 main.go:141] libmachine: (functional-142000) DBG | Closing plugin on server side
I1105 10:00:35.653579   19672 main.go:141] libmachine: Successfully made call to close driver server
I1105 10:00:35.653585   19672 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.15s)
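For reference, the three buildkit steps logged above (#5 FROM, #6 RUN, #7 ADD) correspond to a Dockerfile of roughly this shape. This is a reconstruction from the step output only, not the actual test fixture shipped in the minikube repo:

```dockerfile
# Reconstructed from the build log; the real testdata Dockerfile may differ.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```

The resulting image is tagged `localhost/my-image:functional-142000` by the `docker build -t` invocation shown in the log.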

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.69883468s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-142000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/DockerEnv/bash (0.68s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-142000 docker-env) && out/minikube-darwin-amd64 status -p functional-142000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-142000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon kicbase/echo-server:functional-142000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon kicbase/echo-server:functional-142000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.68s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-142000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load --daemon kicbase/echo-server:functional-142000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image save kicbase/echo-server:functional-142000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image rm kicbase/echo-server:functional-142000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-142000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 image save --daemon kicbase/echo-server:functional-142000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-142000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 19047: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-142000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8a9e62bc-d5f4-478a-8fed-3a88d80deb97] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8a9e62bc-d5f4-478a-8fed-3a88d80deb97] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.005644147s
I1105 09:59:58.596796   17842 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-142000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.194.133 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1105 09:59:58.697537   17842 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.05s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1105 09:59:58.775863   17842 config.go:182] Loaded profile config "functional-142000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-142000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-142000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-142000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-jn52q" [f48eb87e-6021-42b1-8878-895d4dc9d79f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-jn52q" [f48eb87e-6021-42b1-8878-895d4dc9d79f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005161306s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

TestFunctional/parallel/ServiceCmd/List (0.79s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.79s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.79s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service list -o json
functional_test.go:1494: Took "791.432206ms" to run "out/minikube-darwin-amd64 -p functional-142000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.79s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:31515
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:31515
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "227.084781ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "90.713544ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "227.79248ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "94.709261ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (5.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port887381946/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730829622634879000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port887381946/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730829622634879000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port887381946/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730829622634879000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port887381946/001/test-1730829622634879000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (164.979518ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1105 10:00:22.800689   17842 retry.go:31] will retry after 285.27606ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 test-1730829622634879000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh cat /mount-9p/test-1730829622634879000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-142000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [55c27fd3-1920-4410-af84-f002017221a4] Pending
helpers_test.go:344: "busybox-mount" [55c27fd3-1920-4410-af84-f002017221a4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [55c27fd3-1920-4410-af84-f002017221a4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [55c27fd3-1920-4410-af84-f002017221a4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003489067s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-142000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port887381946/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.99s)
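In the log above, the first `findmnt -T /mount-9p | grep 9p` probe fails with exit status 1 and `retry.go:31` re-runs it after a short backoff until the 9p mount appears. The pattern can be sketched as a generic shell retry loop; the `retry` function name is illustrative, not part of the test suite:

```shell
#!/usr/bin/env bash
# retry TRIES DELAY CMD...: run CMD until it succeeds, at most TRIES times,
# sleeping DELAY seconds between attempts; propagate the last exit status.
retry() {
  local tries=$1 delay=$2; shift 2
  local i rc=0
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    rc=$?
    sleep "$delay"
  done
  return "$rc"
}

# e.g. wait for the guest-side 9p mount to show up:
# retry 5 1 out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
```

The test helper additionally caps the total wait rather than the attempt count, but the success/backoff structure is the same.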

TestFunctional/parallel/MountCmd/specific-port (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3330465920/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (171.735561ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1105 10:00:28.789067   17842 retry.go:31] will retry after 632.381808ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3330465920/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "sudo umount -f /mount-9p": exit status 1 (188.829029ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-142000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3330465920/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1: exit status 1 (177.080684ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1105 10:00:30.686199   17842 retry.go:31] will retry after 663.531105ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-142000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-142000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-142000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup853671574/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-142000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-142000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-142000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (222.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-213000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E1105 10:01:31.125212   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-213000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m41.718048266s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (222.13s)

TestMultiControlPlane/serial/DeployApp (5.98s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- rollout status deployment/busybox
E1105 10:04:33.993557   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:33.999952   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:34.011605   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:34.033942   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-213000 -- rollout status deployment/busybox: (3.150667546s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- get pods -o jsonpath='{.items[*].status.podIP}'
E1105 10:04:34.077381   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:34.158742   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- get pods -o jsonpath='{.items[*].metadata.name}'
E1105 10:04:34.320791   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-89r49 -- nslookup kubernetes.io
E1105 10:04:34.643310   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-q5j74 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-x9hwg -- nslookup kubernetes.io
E1105 10:04:35.286028   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-89r49 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-q5j74 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-x9hwg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-89r49 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-q5j74 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-x9hwg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.98s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- get pods -o jsonpath='{.items[*].metadata.name}'
E1105 10:04:36.567307   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-89r49 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-89r49 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-q5j74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-q5j74 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-x9hwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-213000 -- exec busybox-7dff88458-x9hwg -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (50.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-213000 -v=7 --alsologtostderr
E1105 10:04:39.129369   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:44.250570   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:04:54.492823   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:05:14.974274   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-213000 -v=7 --alsologtostderr: (49.792323795s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.29s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-213000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

TestMultiControlPlane/serial/CopyFile (10.3s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp testdata/cp-test.txt ha-213000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000:/home/docker/cp-test.txt ha-213000-m02:/home/docker/cp-test_ha-213000_ha-213000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test_ha-213000_ha-213000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000:/home/docker/cp-test.txt ha-213000-m03:/home/docker/cp-test_ha-213000_ha-213000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test_ha-213000_ha-213000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000:/home/docker/cp-test.txt ha-213000-m04:/home/docker/cp-test_ha-213000_ha-213000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test_ha-213000_ha-213000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp testdata/cp-test.txt ha-213000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m02:/home/docker/cp-test.txt ha-213000:/home/docker/cp-test_ha-213000-m02_ha-213000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test_ha-213000-m02_ha-213000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m02:/home/docker/cp-test.txt ha-213000-m03:/home/docker/cp-test_ha-213000-m02_ha-213000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test_ha-213000-m02_ha-213000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m02:/home/docker/cp-test.txt ha-213000-m04:/home/docker/cp-test_ha-213000-m02_ha-213000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test_ha-213000-m02_ha-213000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp testdata/cp-test.txt ha-213000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt ha-213000:/home/docker/cp-test_ha-213000-m03_ha-213000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test_ha-213000-m03_ha-213000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt ha-213000-m02:/home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test_ha-213000-m03_ha-213000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m03:/home/docker/cp-test.txt ha-213000-m04:/home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test_ha-213000-m03_ha-213000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp testdata/cp-test.txt ha-213000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile1308940127/001/cp-test_ha-213000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt ha-213000:/home/docker/cp-test_ha-213000-m04_ha-213000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000 "sudo cat /home/docker/cp-test_ha-213000-m04_ha-213000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt ha-213000-m02:/home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m02 "sudo cat /home/docker/cp-test_ha-213000-m04_ha-213000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 cp ha-213000-m04:/home/docker/cp-test.txt ha-213000-m03:/home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 ssh -n ha-213000-m03 "sudo cat /home/docker/cp-test_ha-213000-m04_ha-213000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.30s)

TestMultiControlPlane/serial/StopSecondaryNode (8.78s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 node stop m02 -v=7 --alsologtostderr: (8.389781968s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 7 (389.188714ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-213000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-213000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1105 10:05:47.742007   20231 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:05:47.742350   20231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:05:47.742356   20231 out.go:358] Setting ErrFile to fd 2...
	I1105 10:05:47.742360   20231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:05:47.742538   20231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:05:47.742706   20231 out.go:352] Setting JSON to false
	I1105 10:05:47.742730   20231 mustload.go:65] Loading cluster: ha-213000
	I1105 10:05:47.742763   20231 notify.go:220] Checking for updates...
	I1105 10:05:47.743074   20231 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:05:47.743098   20231 status.go:174] checking status of ha-213000 ...
	I1105 10:05:47.743534   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.743563   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.754894   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58347
	I1105 10:05:47.755208   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.755601   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.755609   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.755867   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.755978   20231 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:05:47.756096   20231 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:47.756176   20231 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 19716
	I1105 10:05:47.757345   20231 status.go:371] ha-213000 host status = "Running" (err=<nil>)
	I1105 10:05:47.757364   20231 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:05:47.757641   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.757680   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.768625   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58349
	I1105 10:05:47.768936   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.769293   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.769304   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.769541   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.769638   20231 main.go:141] libmachine: (ha-213000) Calling .GetIP
	I1105 10:05:47.769746   20231 host.go:66] Checking if "ha-213000" exists ...
	I1105 10:05:47.770010   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.770047   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.780985   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58351
	I1105 10:05:47.781287   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.781623   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.781642   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.781848   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.781950   20231 main.go:141] libmachine: (ha-213000) Calling .DriverName
	I1105 10:05:47.782117   20231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:05:47.782142   20231 main.go:141] libmachine: (ha-213000) Calling .GetSSHHostname
	I1105 10:05:47.782222   20231 main.go:141] libmachine: (ha-213000) Calling .GetSSHPort
	I1105 10:05:47.782304   20231 main.go:141] libmachine: (ha-213000) Calling .GetSSHKeyPath
	I1105 10:05:47.782389   20231 main.go:141] libmachine: (ha-213000) Calling .GetSSHUsername
	I1105 10:05:47.782478   20231 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000/id_rsa Username:docker}
	I1105 10:05:47.816959   20231 ssh_runner.go:195] Run: systemctl --version
	I1105 10:05:47.821188   20231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:05:47.833255   20231 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:05:47.833280   20231 api_server.go:166] Checking apiserver status ...
	I1105 10:05:47.833333   20231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:05:47.845648   20231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup
	W1105 10:05:47.853814   20231 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1996/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:05:47.853890   20231 ssh_runner.go:195] Run: ls
	I1105 10:05:47.856971   20231 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:05:47.861312   20231 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:05:47.861325   20231 status.go:463] ha-213000 apiserver status = Running (err=<nil>)
	I1105 10:05:47.861348   20231 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:05:47.861361   20231 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:05:47.861633   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.861654   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.872662   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58355
	I1105 10:05:47.872983   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.873327   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.873343   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.873548   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.873651   20231 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:05:47.873762   20231 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:47.873825   20231 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 19738
	I1105 10:05:47.874977   20231 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 19738 missing from process table
	I1105 10:05:47.875031   20231 status.go:371] ha-213000-m02 host status = "Stopped" (err=<nil>)
	I1105 10:05:47.875039   20231 status.go:384] host is not running, skipping remaining checks
	I1105 10:05:47.875045   20231 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:05:47.875066   20231 status.go:174] checking status of ha-213000-m03 ...
	I1105 10:05:47.875346   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.875372   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.886420   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58357
	I1105 10:05:47.886752   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.887087   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.887110   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.887322   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.887421   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetState
	I1105 10:05:47.887518   20231 main.go:141] libmachine: (ha-213000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:47.887612   20231 main.go:141] libmachine: (ha-213000-m03) DBG | hyperkit pid from json: 19776
	I1105 10:05:47.888809   20231 status.go:371] ha-213000-m03 host status = "Running" (err=<nil>)
	I1105 10:05:47.888818   20231 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:05:47.889066   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.889087   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.900149   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58359
	I1105 10:05:47.900497   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.900859   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.900882   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.901091   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.901214   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetIP
	I1105 10:05:47.901330   20231 host.go:66] Checking if "ha-213000-m03" exists ...
	I1105 10:05:47.901599   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.901621   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.912570   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58361
	I1105 10:05:47.912975   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.913344   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.913358   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.913564   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.913666   20231 main.go:141] libmachine: (ha-213000-m03) Calling .DriverName
	I1105 10:05:47.913811   20231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:05:47.913823   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHHostname
	I1105 10:05:47.913907   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHPort
	I1105 10:05:47.913984   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHKeyPath
	I1105 10:05:47.914062   20231 main.go:141] libmachine: (ha-213000-m03) Calling .GetSSHUsername
	I1105 10:05:47.914146   20231 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m03/id_rsa Username:docker}
	I1105 10:05:47.943375   20231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:05:47.955055   20231 kubeconfig.go:125] found "ha-213000" server: "https://192.169.0.254:8443"
	I1105 10:05:47.955070   20231 api_server.go:166] Checking apiserver status ...
	I1105 10:05:47.955120   20231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:05:47.966492   20231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup
	W1105 10:05:47.974730   20231 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1852/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:05:47.974800   20231 ssh_runner.go:195] Run: ls
	I1105 10:05:47.978098   20231 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I1105 10:05:47.981895   20231 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I1105 10:05:47.981907   20231 status.go:463] ha-213000-m03 apiserver status = Running (err=<nil>)
	I1105 10:05:47.981913   20231 status.go:176] ha-213000-m03 status: &{Name:ha-213000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:05:47.981926   20231 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:05:47.982210   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.982232   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:47.993354   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58365
	I1105 10:05:47.993703   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:47.994050   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:47.994063   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:47.994266   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:47.994367   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:05:47.994459   20231 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:05:47.994547   20231 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 19891
	I1105 10:05:47.995723   20231 status.go:371] ha-213000-m04 host status = "Running" (err=<nil>)
	I1105 10:05:47.995733   20231 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:05:47.995997   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:47.996020   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:48.006810   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58367
	I1105 10:05:48.007126   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:48.007470   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:48.007488   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:48.007692   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:48.007793   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetIP
	I1105 10:05:48.007887   20231 host.go:66] Checking if "ha-213000-m04" exists ...
	I1105 10:05:48.008132   20231 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:05:48.008152   20231 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:05:48.019060   20231 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58369
	I1105 10:05:48.019396   20231 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:05:48.019742   20231 main.go:141] libmachine: Using API Version  1
	I1105 10:05:48.019760   20231 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:05:48.019969   20231 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:05:48.020068   20231 main.go:141] libmachine: (ha-213000-m04) Calling .DriverName
	I1105 10:05:48.020232   20231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:05:48.020244   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHHostname
	I1105 10:05:48.020324   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHPort
	I1105 10:05:48.020419   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHKeyPath
	I1105 10:05:48.020504   20231 main.go:141] libmachine: (ha-213000-m04) Calling .GetSSHUsername
	I1105 10:05:48.020585   20231 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/ha-213000-m04/id_rsa Username:docker}
	I1105 10:05:48.050283   20231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:05:48.061422   20231 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.78s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.43s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.43s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (225.66s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-213000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-213000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-213000 -v=7 --alsologtostderr: (27.154625734s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-213000 --wait=true -v=7 --alsologtostderr
E1105 10:09:33.992972   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:10:01.699829   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:11:31.119154   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-darwin-amd64 start -p ha-213000 --wait=true -v=7 --alsologtostderr: (3m18.356880748s)
ha_test.go:474: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-213000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (225.66s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 node delete m03 -v=7 --alsologtostderr: (7.186236867s)
ha_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.76s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

TestMultiControlPlane/serial/StopCluster (25s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-amd64 -p ha-213000 stop -v=7 --alsologtostderr: (24.887298872s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-213000 status -v=7 --alsologtostderr: exit status 7 (114.801363ms)

-- stdout --
	ha-213000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-213000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-213000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1105 10:12:21.374285   20645 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:12:21.374614   20645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.374620   20645 out.go:358] Setting ErrFile to fd 2...
	I1105 10:12:21.374624   20645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:12:21.374813   20645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:12:21.374989   20645 out.go:352] Setting JSON to false
	I1105 10:12:21.375010   20645 mustload.go:65] Loading cluster: ha-213000
	I1105 10:12:21.375066   20645 notify.go:220] Checking for updates...
	I1105 10:12:21.375331   20645 config.go:182] Loaded profile config "ha-213000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:12:21.375353   20645 status.go:174] checking status of ha-213000 ...
	I1105 10:12:21.375782   20645 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.375833   20645 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.387284   20645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59000
	I1105 10:12:21.387601   20645 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.387993   20645 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.388002   20645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.388211   20645 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.388327   20645 main.go:141] libmachine: (ha-213000) Calling .GetState
	I1105 10:12:21.388500   20645 main.go:141] libmachine: (ha-213000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.388526   20645 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid from json: 20508
	I1105 10:12:21.389593   20645 main.go:141] libmachine: (ha-213000) DBG | hyperkit pid 20508 missing from process table
	I1105 10:12:21.389641   20645 status.go:371] ha-213000 host status = "Stopped" (err=<nil>)
	I1105 10:12:21.389652   20645 status.go:384] host is not running, skipping remaining checks
	I1105 10:12:21.389657   20645 status.go:176] ha-213000 status: &{Name:ha-213000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:12:21.389682   20645 status.go:174] checking status of ha-213000-m02 ...
	I1105 10:12:21.389946   20645 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.389972   20645 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.404498   20645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59002
	I1105 10:12:21.404879   20645 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.405234   20645 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.405254   20645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.405473   20645 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.405570   20645 main.go:141] libmachine: (ha-213000-m02) Calling .GetState
	I1105 10:12:21.405680   20645 main.go:141] libmachine: (ha-213000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.405743   20645 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid from json: 20524
	I1105 10:12:21.406866   20645 main.go:141] libmachine: (ha-213000-m02) DBG | hyperkit pid 20524 missing from process table
	I1105 10:12:21.406910   20645 status.go:371] ha-213000-m02 host status = "Stopped" (err=<nil>)
	I1105 10:12:21.406917   20645 status.go:384] host is not running, skipping remaining checks
	I1105 10:12:21.406921   20645 status.go:176] ha-213000-m02 status: &{Name:ha-213000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:12:21.406930   20645 status.go:174] checking status of ha-213000-m04 ...
	I1105 10:12:21.407183   20645 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:12:21.407208   20645 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:12:21.418171   20645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59004
	I1105 10:12:21.418456   20645 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:12:21.418768   20645 main.go:141] libmachine: Using API Version  1
	I1105 10:12:21.418777   20645 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:12:21.418987   20645 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:12:21.419085   20645 main.go:141] libmachine: (ha-213000-m04) Calling .GetState
	I1105 10:12:21.419171   20645 main.go:141] libmachine: (ha-213000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:12:21.419250   20645 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid from json: 20571
	I1105 10:12:21.420319   20645 main.go:141] libmachine: (ha-213000-m04) DBG | hyperkit pid 20571 missing from process table
	I1105 10:12:21.420354   20645 status.go:371] ha-213000-m04 host status = "Stopped" (err=<nil>)
	I1105 10:12:21.420365   20645 status.go:384] host is not running, skipping remaining checks
	I1105 10:12:21.420369   20645 status.go:176] ha-213000-m04 status: &{Name:ha-213000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.00s)

TestImageBuild/serial/Setup (37.67s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-322000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-322000 --driver=hyperkit : (37.673537564s)
--- PASS: TestImageBuild/serial/Setup (37.67s)

TestImageBuild/serial/NormalBuild (1.65s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-322000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-322000: (1.64532415s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithBuildArg (0.67s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-322000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.67s)

TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-322000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.59s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-322000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.59s)

TestJSONOutput/start/Command (74.08s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-954000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-954000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m14.079476706s)
--- PASS: TestJSONOutput/start/Command (74.08s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-954000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-954000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.32s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-954000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-954000 --output=json --user=testUser: (8.323702457s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.63s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-894000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-894000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (378.132346ms)

-- stdout --
	{"specversion":"1.0","id":"437ae713-3648-4a43-8386-a30bb5991327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-894000] minikube v1.34.0 on Darwin 15.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"979b83b7-b440-47b9-aecf-679e989b835d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19910"}}
	{"specversion":"1.0","id":"6b38ee71-0909-4529-bb84-3b0cb1283c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig"}}
	{"specversion":"1.0","id":"3bb0c6b4-5d56-4bd7-a246-ba2b8f83225b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"232e92c5-ea94-4773-a08a-c66f9474f629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8aa3e180-6d34-4ff6-b445-01853d5a112b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube"}}
	{"specversion":"1.0","id":"2eae7b10-e4b4-4de3-9a2d-777867f6443a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5beee259-647a-46da-acde-6e955757a2df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-894000
--- PASS: TestErrorJSONOutput (0.63s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (104.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-798000 --driver=hyperkit 
E1105 10:19:34.026865   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-798000 --driver=hyperkit : (54.407234558s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-810000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-810000 --driver=hyperkit : (38.00805444s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-798000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-810000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-810000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-810000: (5.276899018s)
helpers_test.go:175: Cleaning up "first-798000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-798000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-798000: (5.355416852s)
--- PASS: TestMinikubeProfile (104.09s)

TestMultiNode/serial/FreshStart2Nodes (108.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-193000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1105 10:24:34.026551   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:24:34.239205   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-193000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m48.44932773s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.73s)

TestMultiNode/serial/DeployApp2Nodes (4.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-193000 -- rollout status deployment/busybox: (2.499849127s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-h6v2c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-t279k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-h6v2c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-t279k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-h6v2c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-t279k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-h6v2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-h6v2c -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-t279k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-193000 -- exec busybox-7dff88458-t279k -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (45.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-193000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-193000 -v 3 --alsologtostderr: (44.804357032s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.16s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-193000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (5.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp testdata/cp-test.txt multinode-193000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1255497622/001/cp-test_multinode-193000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000:/home/docker/cp-test.txt multinode-193000-m02:/home/docker/cp-test_multinode-193000_multinode-193000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test_multinode-193000_multinode-193000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000:/home/docker/cp-test.txt multinode-193000-m03:/home/docker/cp-test_multinode-193000_multinode-193000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test_multinode-193000_multinode-193000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp testdata/cp-test.txt multinode-193000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1255497622/001/cp-test_multinode-193000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m02:/home/docker/cp-test.txt multinode-193000:/home/docker/cp-test_multinode-193000-m02_multinode-193000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test_multinode-193000-m02_multinode-193000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m02:/home/docker/cp-test.txt multinode-193000-m03:/home/docker/cp-test_multinode-193000-m02_multinode-193000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test_multinode-193000-m02_multinode-193000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp testdata/cp-test.txt multinode-193000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1255497622/001/cp-test_multinode-193000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m03:/home/docker/cp-test.txt multinode-193000:/home/docker/cp-test_multinode-193000-m03_multinode-193000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000 "sudo cat /home/docker/cp-test_multinode-193000-m03_multinode-193000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 cp multinode-193000-m03:/home/docker/cp-test.txt multinode-193000-m02:/home/docker/cp-test_multinode-193000-m03_multinode-193000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 ssh -n multinode-193000-m02 "sudo cat /home/docker/cp-test_multinode-193000-m03_multinode-193000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.97s)

TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-193000 node stop m03: (2.36179057s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-193000 status: exit status 7 (281.047295ms)

-- stdout --
	multinode-193000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr: exit status 7 (281.645195ms)

-- stdout --
	multinode-193000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-193000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-193000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1105 10:25:56.305291   21594 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:25:56.305514   21594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:25:56.305520   21594 out.go:358] Setting ErrFile to fd 2...
	I1105 10:25:56.305524   21594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:25:56.305698   21594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:25:56.305876   21594 out.go:352] Setting JSON to false
	I1105 10:25:56.305898   21594 mustload.go:65] Loading cluster: multinode-193000
	I1105 10:25:56.305933   21594 notify.go:220] Checking for updates...
	I1105 10:25:56.306262   21594 config.go:182] Loaded profile config "multinode-193000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:25:56.306284   21594 status.go:174] checking status of multinode-193000 ...
	I1105 10:25:56.306708   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.306750   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.318211   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60086
	I1105 10:25:56.318539   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.318923   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.318933   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.319137   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.319237   21594 main.go:141] libmachine: (multinode-193000) Calling .GetState
	I1105 10:25:56.319332   21594 main.go:141] libmachine: (multinode-193000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:25:56.319393   21594 main.go:141] libmachine: (multinode-193000) DBG | hyperkit pid from json: 21270
	I1105 10:25:56.320762   21594 status.go:371] multinode-193000 host status = "Running" (err=<nil>)
	I1105 10:25:56.320780   21594 host.go:66] Checking if "multinode-193000" exists ...
	I1105 10:25:56.321041   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.321060   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.332057   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60088
	I1105 10:25:56.332410   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.332747   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.332756   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.333002   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.333113   21594 main.go:141] libmachine: (multinode-193000) Calling .GetIP
	I1105 10:25:56.333216   21594 host.go:66] Checking if "multinode-193000" exists ...
	I1105 10:25:56.333499   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.333531   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.344475   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60090
	I1105 10:25:56.344832   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.345168   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.345178   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.345391   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.345498   21594 main.go:141] libmachine: (multinode-193000) Calling .DriverName
	I1105 10:25:56.345695   21594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:25:56.345715   21594 main.go:141] libmachine: (multinode-193000) Calling .GetSSHHostname
	I1105 10:25:56.345799   21594 main.go:141] libmachine: (multinode-193000) Calling .GetSSHPort
	I1105 10:25:56.345888   21594 main.go:141] libmachine: (multinode-193000) Calling .GetSSHKeyPath
	I1105 10:25:56.345983   21594 main.go:141] libmachine: (multinode-193000) Calling .GetSSHUsername
	I1105 10:25:56.346077   21594 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/multinode-193000/id_rsa Username:docker}
	I1105 10:25:56.378030   21594 ssh_runner.go:195] Run: systemctl --version
	I1105 10:25:56.382253   21594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:25:56.393080   21594 kubeconfig.go:125] found "multinode-193000" server: "https://192.169.0.15:8443"
	I1105 10:25:56.393103   21594 api_server.go:166] Checking apiserver status ...
	I1105 10:25:56.393153   21594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 10:25:56.404327   21594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1917/cgroup
	W1105 10:25:56.411881   21594 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1917/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1105 10:25:56.411948   21594 ssh_runner.go:195] Run: ls
	I1105 10:25:56.415051   21594 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I1105 10:25:56.418143   21594 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I1105 10:25:56.418162   21594 status.go:463] multinode-193000 apiserver status = Running (err=<nil>)
	I1105 10:25:56.418170   21594 status.go:176] multinode-193000 status: &{Name:multinode-193000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:25:56.418180   21594 status.go:174] checking status of multinode-193000-m02 ...
	I1105 10:25:56.418434   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.418455   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.429574   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60094
	I1105 10:25:56.429911   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.430303   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.430319   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.430553   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.430658   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetState
	I1105 10:25:56.430743   21594 main.go:141] libmachine: (multinode-193000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:25:56.430813   21594 main.go:141] libmachine: (multinode-193000-m02) DBG | hyperkit pid from json: 21299
	I1105 10:25:56.432176   21594 status.go:371] multinode-193000-m02 host status = "Running" (err=<nil>)
	I1105 10:25:56.432184   21594 host.go:66] Checking if "multinode-193000-m02" exists ...
	I1105 10:25:56.432455   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.432478   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.443450   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60096
	I1105 10:25:56.443795   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.444156   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.444174   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.444388   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.444493   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetIP
	I1105 10:25:56.444585   21594 host.go:66] Checking if "multinode-193000-m02" exists ...
	I1105 10:25:56.444844   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.444880   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.455734   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60098
	I1105 10:25:56.456123   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.456493   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.456506   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.456716   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.456814   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .DriverName
	I1105 10:25:56.456971   21594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 10:25:56.456984   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetSSHHostname
	I1105 10:25:56.457069   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetSSHPort
	I1105 10:25:56.457155   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetSSHKeyPath
	I1105 10:25:56.457240   21594 main.go:141] libmachine: (multinode-193000-m02) Calling .GetSSHUsername
	I1105 10:25:56.457328   21594 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19910-17277/.minikube/machines/multinode-193000-m02/id_rsa Username:docker}
	I1105 10:25:56.493143   21594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 10:25:56.504092   21594 status.go:176] multinode-193000-m02 status: &{Name:multinode-193000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:25:56.504108   21594 status.go:174] checking status of multinode-193000-m03 ...
	I1105 10:25:56.504391   21594 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:25:56.504415   21594 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:25:56.515401   21594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60101
	I1105 10:25:56.515723   21594 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:25:56.516052   21594 main.go:141] libmachine: Using API Version  1
	I1105 10:25:56.516078   21594 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:25:56.516319   21594 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:25:56.516425   21594 main.go:141] libmachine: (multinode-193000-m03) Calling .GetState
	I1105 10:25:56.516522   21594 main.go:141] libmachine: (multinode-193000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:25:56.516593   21594 main.go:141] libmachine: (multinode-193000-m03) DBG | hyperkit pid from json: 21373
	I1105 10:25:56.517943   21594 main.go:141] libmachine: (multinode-193000-m03) DBG | hyperkit pid 21373 missing from process table
	I1105 10:25:56.517989   21594 status.go:371] multinode-193000-m03 host status = "Stopped" (err=<nil>)
	I1105 10:25:56.517999   21594 status.go:384] host is not running, skipping remaining checks
	I1105 10:25:56.518003   21594 status.go:176] multinode-193000-m03 status: &{Name:multinode-193000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)

TestMultiNode/serial/StartAfterStop (36.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 node start m03 -v=7 --alsologtostderr
E1105 10:26:31.155481   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-193000 node start m03 -v=7 --alsologtostderr: (36.194115664s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.60s)

TestMultiNode/serial/RestartKeepsNodes (192.64s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-193000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-193000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-193000: (18.875763792s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-193000 --wait=true -v=8 --alsologtostderr
E1105 10:29:34.112095   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-193000 --wait=true -v=8 --alsologtostderr: (2m53.62432379s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-193000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (192.64s)

TestMultiNode/serial/DeleteNode (3.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-193000 node delete m03: (3.090484789s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.50s)

TestMultiNode/serial/StopMultiNode (16.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-193000 stop: (16.654014258s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-193000 status: exit status 7 (98.687299ms)

-- stdout --
	multinode-193000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr: exit status 7 (99.033274ms)

-- stdout --
	multinode-193000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-193000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1105 10:30:06.175979   22053 out.go:345] Setting OutFile to fd 1 ...
	I1105 10:30:06.177518   22053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:30:06.177531   22053 out.go:358] Setting ErrFile to fd 2...
	I1105 10:30:06.177536   22053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 10:30:06.177719   22053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19910-17277/.minikube/bin
	I1105 10:30:06.177903   22053 out.go:352] Setting JSON to false
	I1105 10:30:06.177928   22053 mustload.go:65] Loading cluster: multinode-193000
	I1105 10:30:06.177961   22053 notify.go:220] Checking for updates...
	I1105 10:30:06.178279   22053 config.go:182] Loaded profile config "multinode-193000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1105 10:30:06.178302   22053 status.go:174] checking status of multinode-193000 ...
	I1105 10:30:06.178699   22053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:30:06.178740   22053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:30:06.190242   22053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60334
	I1105 10:30:06.190559   22053 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:30:06.190946   22053 main.go:141] libmachine: Using API Version  1
	I1105 10:30:06.190954   22053 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:30:06.191175   22053 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:30:06.191335   22053 main.go:141] libmachine: (multinode-193000) Calling .GetState
	I1105 10:30:06.191434   22053 main.go:141] libmachine: (multinode-193000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:30:06.191500   22053 main.go:141] libmachine: (multinode-193000) DBG | hyperkit pid from json: 21660
	I1105 10:30:06.192635   22053 main.go:141] libmachine: (multinode-193000) DBG | hyperkit pid 21660 missing from process table
	I1105 10:30:06.192665   22053 status.go:371] multinode-193000 host status = "Stopped" (err=<nil>)
	I1105 10:30:06.192677   22053 status.go:384] host is not running, skipping remaining checks
	I1105 10:30:06.192682   22053 status.go:176] multinode-193000 status: &{Name:multinode-193000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 10:30:06.192706   22053 status.go:174] checking status of multinode-193000-m02 ...
	I1105 10:30:06.192957   22053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1105 10:30:06.192978   22053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1105 10:30:06.203893   22053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60336
	I1105 10:30:06.204227   22053 main.go:141] libmachine: () Calling .GetVersion
	I1105 10:30:06.204591   22053 main.go:141] libmachine: Using API Version  1
	I1105 10:30:06.204614   22053 main.go:141] libmachine: () Calling .SetConfigRaw
	I1105 10:30:06.204850   22053 main.go:141] libmachine: () Calling .GetMachineName
	I1105 10:30:06.204959   22053 main.go:141] libmachine: (multinode-193000-m02) Calling .GetState
	I1105 10:30:06.205070   22053 main.go:141] libmachine: (multinode-193000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1105 10:30:06.205134   22053 main.go:141] libmachine: (multinode-193000-m02) DBG | hyperkit pid from json: 21679
	I1105 10:30:06.206240   22053 main.go:141] libmachine: (multinode-193000-m02) DBG | hyperkit pid 21679 missing from process table
	I1105 10:30:06.206285   22053 status.go:371] multinode-193000-m02 host status = "Stopped" (err=<nil>)
	I1105 10:30:06.206298   22053 status.go:384] host is not running, skipping remaining checks
	I1105 10:30:06.206302   22053 status.go:176] multinode-193000-m02 status: &{Name:multinode-193000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.85s)

TestMultiNode/serial/RestartMultiNode (100.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-193000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1105 10:31:31.244130   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-193000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m40.002287471s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-193000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (100.39s)

TestMultiNode/serial/ValidateNameConflict (160.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-193000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-193000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-193000-m02 --driver=hyperkit : exit status 14 (486.456694ms)

-- stdout --
	* [multinode-193000-m02] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-193000-m02' is duplicated with machine name 'multinode-193000-m02' in profile 'multinode-193000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-193000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-193000-m03 --driver=hyperkit : (2m34.557554608s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-193000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-193000: exit status 80 (326.513732ms)

-- stdout --
	* Adding node m03 to cluster multinode-193000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-193000-m03 already exists in multinode-193000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-193000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-193000-m03: (5.277784281s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (160.72s)

TestPreload (140.71s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-869000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-869000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m10.99024572s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-869000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-869000 image pull gcr.io/k8s-minikube/busybox: (1.48348665s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-869000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-869000: (8.380883791s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-869000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1105 10:36:31.248809   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-869000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (54.419692207s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-869000 image list
helpers_test.go:175: Cleaning up "test-preload-869000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-869000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-869000: (5.263494029s)
--- PASS: TestPreload (140.71s)

TestSkaffold (114.57s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3866186936 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3866186936 version: (1.63593131s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-846000 --memory=2600 --driver=hyperkit 
E1105 10:39:34.123497   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-846000 --memory=2600 --driver=hyperkit : (36.089294061s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3866186936 run --minikube-profile skaffold-846000 --kube-context skaffold-846000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3866186936 run --minikube-profile skaffold-846000 --kube-context skaffold-846000 --status-check=true --port-forward=false --interactive=false: (56.269567991s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-577566bc8c-hzz8d" [3a475cc3-a2bb-43ab-81f2-1a1279a9dcba] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004526294s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6b5dcfdc8f-7lntx" [bc190e1f-dc65-4010-ab47-ad72696509ec] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003834041s
helpers_test.go:175: Cleaning up "skaffold-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-846000
E1105 10:41:14.339622   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-846000: (5.280698234s)
--- PASS: TestSkaffold (114.57s)

TestRunningBinaryUpgrade (90.07s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2422525715 start -p running-upgrade-379000 --memory=2200 --vm-driver=hyperkit 
E1105 10:54:17.250218   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
E1105 10:54:34.172516   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/functional-142000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2422525715 start -p running-upgrade-379000 --memory=2200 --vm-driver=hyperkit : (59.274931665s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-379000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-379000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (23.759146715s)
helpers_test.go:175: Cleaning up "running-upgrade-379000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-379000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-379000: (5.267339347s)
--- PASS: TestRunningBinaryUpgrade (90.07s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19910
- KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current355816057/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current355816057/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current355816057/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current355816057/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19910
- KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2654643787/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2654643787/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2654643787/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2654643787/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
E1105 10:41:31.256109   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/addons-133000/client.crt: no such file or directory" logger="UnhandledError"
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.71s)

TestStoppedBinaryUpgrade/Setup (2.08s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.08s)

TestStoppedBinaryUpgrade/Upgrade (126.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3607975801 start -p stopped-upgrade-588000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3607975801 start -p stopped-upgrade-588000 --memory=2200 --vm-driver=hyperkit : (39.117714194s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3607975801 -p stopped-upgrade-588000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.3607975801 -p stopped-upgrade-588000 stop: (8.295391392s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-588000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1105 11:19:02.326584   17842 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19910-17277/.minikube/profiles/skaffold-846000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-588000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m19.046574843s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (126.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-588000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-588000: (2.577530623s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (525.998418ms)

-- stdout --
	* [NoKubernetes-081000] minikube v1.34.0 on Darwin 15.0.1
	  - MINIKUBE_LOCATION=19910
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19910-17277/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19910-17277/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.53s)

TestNoKubernetes/serial/Start (22.14s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-081000 --no-kubernetes --driver=hyperkit : (22.143316126s)
--- PASS: TestNoKubernetes/serial/Start (22.14s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-081000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-081000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (151.310313ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

TestNoKubernetes/serial/ProfileList (0.64s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.64s)

TestNoKubernetes/serial/Stop (2.46s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-081000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-081000: (2.457738734s)
--- PASS: TestNoKubernetes/serial/Stop (2.46s)

TestNoKubernetes/serial/StartNoArgs (21.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-081000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-081000 --driver=hyperkit : (21.093153877s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-081000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-081000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (160.71303ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

Test skip (20/221)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)